PowerShell on Linux? A primer on Object-Shells


In the previous post, Install PowerShell on Fedora Linux, we went through different ways to install PowerShell on Fedora Linux and explained the basics of PowerShell. This post gives you an overview of PowerShell and a comparison to POSIX-compliant shells.

Differences at first glance — Usability

One of the very first differences to take note of when using PowerShell for the first time is semantic clarity.

Most commands in traditional POSIX shells, like the Bourne Again Shell (BASH), are heavily abbreviated and often require memorizing.

Commands like awk, ps, top or even ls do not communicate what they do through their names. Only when one already knows what they do do the names start to make sense: once I know that ls lists files, the abbreviation makes sense.

In PowerShell on the other hand, commands are perfectly self-descriptive. They accomplish this by following a strict naming convention.

Commands in PowerShell are called “cmdlets” (pronounced commandlets). These always follow the scheme of Verb-Noun.

One example: To get all files or child-items in a directory I tell PowerShell like this:

PS > Get-ChildItem

    Directory: /home/Ozymandias42

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d----          14/04/2021    08:11                Folder1
d----          13/04/2021    11:55                Folder2

An Aside:
The cmdlet name is Get-ChildItem not Items. This is in acknowledgement of Set-theory. Each of the standard cmdlets return a list or a set of results. The number of items in a set —mathematicians call this the sets cardinality— can be 0, 1 or any arbitrary natural number, meaning the set can be empty, contain exactly one result or many results. The reason for this, and why I stress this here, is because the standard cmdlets also implicitly implement a ForEach-Loop for any results they return. More about this later.

Speed and efficiency


You might have noticed that standard cmdlet names are long and can therefore be time-consuming to type when writing scripts. However, many cmdlets are aliased, and cmdlet names are case-insensitive, which mitigates this problem.

Let’s write a script with unaliased cmdlets as an example:

PS > Get-Process | ForEach-Object {Write-Host $_.Name -ForegroundColor Cyan}

This lists the name of running processes in cyan. As you can see, many characters are in upper case and cmdlets names are relatively long. Let’s shorten them and replace upper case letters to make the script easier to type:

PS > gps | foreach {write-host $_.name -foregroundcolor cyan}

This is the same script but with greatly simplified input.

To see the full list of aliased cmdlets, type Get-Alias.

Custom aliases

Just like any other shell, PowerShell also lets you set your own aliases by using the Set-Alias cmdlet. Let’s alias Write-Host to something simpler so we can make the same script even easier to type:

PS > Set-Alias -Name wh -Value Write-Host

Here, we aliased wh to Write-Host to reduce typing. When setting aliases, -Name indicates what you want the alias to be and -Value indicates what you want to alias.

Let’s see how it looks now:

PS > gps | foreach {wh $_.name -foregroundcolor cyan}

You can see that we already made the script easier to type. If we wanted, we could also alias ForEach-Object to fe, but you get the gist.

If you want to see the properties of an alias, use the Get-Alias cmdlet. Let's check the properties of the alias wh:

PS > Get-Alias wh

CommandType     Name                  Version    Source
-----------     ----                  -------    ------
Alias           wh -> Write-Host                       

Autocompletion and suggestions

By default, PowerShell suggests cmdlets or flags when you press the Tab key twice. If there is only one candidate, PowerShell completes it automatically.

Differences from POSIX shells — Char-stream vs. Object-stream

Anyone scripting will eventually string commands together via the pipe | and soon come to notice a few key differences.

In bash what is moved from one command to the next through a pipe is just a string of characters. However, in PowerShell this is not the case.

In PowerShell, every cmdlet is aware of data structures and objects. For example, a structure like this (two author records, each with a number and a name):

AuthorNr: 1, AuthorName: Ozy
AuthorNr: 2, AuthorName: Skelly
This data is kept as-is even if a command, used alone, would have presented this data as follows:

AuthorNr.  AuthorName
1          Ozy
2          Skelly

In bash, on the other hand, that formatted output would need to be created by parsing with helper tools like awk or cut first, to be usable with a different command.

PowerShell does not require this parsing, since the underlying structure is sent through the pipe rather than the formatted output shown when the command runs alone. So a command like authorObject | doThingsWithSingleAuthor firstAuthor is possible.
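To make the contrast concrete, here is a minimal sketch of the parsing bash would need to pull a single field out of that formatted author table. The table itself is inlined with printf purely for illustration, since bash has no author command:

```shell
# A bash pipeline has to parse the *formatted text* to get at a field.
printf 'AuthorNr.  AuthorName\n1          Ozy\n2          Skelly\n' \
  | awk 'NR > 1 { print $2 }'   # skip the header line, keep column 2
```

This prints the bare names Ozy and Skelly; any change to the table layout (an extra column, different spacing) silently breaks the awk expression.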

The following examples shall further illustrate this.

Beware: This will get fairly technical and verbose. Skip if satisfied already.

A few of the most often used constructs to illustrate the advantage of PowerShell over bash, when using pipes, are to:

  • filter for something
  • format output
  • sort output

When implementing these in bash, a few things re-occur time and time again.
The following sections illustrate these constructs and their variants in bash and contrast them with their PowerShell equivalents.

To filter for something

Let’s say you want to see all processes matching the name ssh-agent.
In human thinking terms you know what you want.

  1. Get all processes
  2. Filter for all processes that match our criteria
  3. Print those processes

To do this in bash there are two common ways.

The first one, which most people who are comfortable with bash might use, is this:

$ ps -p $(pgrep ssh-agent)

At first glance this is straightforward. ps gets all processes and the -p flag tells it to filter for a given list of PIDs.
What the veteran bash user might forget, however, is that while it reads this way, it is not actually run this way. There is a small but important detail called the order of evaluation.

$() is a subshell. A subshell is run, or evaluated, first. This means the list of PIDs to filter for is generated first, and the result is then substituted in place of the subshell for the waiting outer command ps to use.

This means it is written as:

  1. Print processes
  2. Filter Processes

but evaluated the other way around. It also implicitly combines the original steps 2. and 3.
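This inside-out evaluation is easy to demonstrate in isolation. In the sketch below, printf stands in for pgrep so the result is reproducible:

```shell
# The $() subshell runs first; its output replaces the $() expression
# on the outer command line before that command is executed.
pids="$(printf '101\n202\n' | paste -sd' ' -)"   # join lines with spaces
echo "ps would receive: $pids"
```

Only after the variable (or the $() expression) has been fully expanded does the outer command, here echo, get to run at all.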

A less often used variant that more closely matches the human thought pattern and evaluation order is:

$ pgrep ssh-agent | xargs ps

The second variant still combines two steps, steps 1. and 2., but follows the evaluation logic a human would think of.

The reason this variant is less used is that ominous xargs command. What it basically does is append all lines of output from the previous command as a single long line of arguments to the command that follows it; in this case ps.

This is necessary because pgrep produces a bare list of PIDs, one per line, like this (the PIDs shown are just examples):

$ pgrep bash
1025
4534

When used inside a subshell, ps might not care about this, but when using pipes to approximate the human evaluation order, it becomes a problem.

What xargs does, is to reduce the following construct to a single command:

$ for i in $(pgrep ssh-agent); do ps $i ; done
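To see the effect of xargs in isolation, compare it against a fixed list of fake PIDs; echo stands in for ps here, so the collected argument list is visible:

```shell
# xargs gathers the lines arriving on stdin and appends them as
# arguments to the command that follows it (echo, in this sketch).
printf '101\n202\n303\n' | xargs echo ps
```

The three input lines arrive at echo as one argument list, exactly as ps would receive them.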

Okay. We have now talked a LOT about evaluation order and seen two bash variants with different evaluation orders for the three basic steps we outlined.

So with this much preparation, how does PowerShell handle it?

PS > Get-Process | Where-Object Name -Match ssh-agent

Completely self-descriptive, and it follows the evaluation order of the steps we outlined perfectly. Also take note of the absence of xargs or any explicit for-loop.

As mentioned in the aside a few hundred words back, the standard cmdlets all implement ForEach internally and apply it implicitly when piped input in list form.

Output formatting

This is where PowerShell really shines. Consider a simple example to see how it's done in bash first. Say we want to list all files in a directory sorted by size from the biggest to the smallest, shown as a table with filename, size and creation date. Also, let's say we have some files with long filenames in there and want to make sure we get the full filename no matter how wide our terminal is.

Field separators, column-counting and sorting

Now the first obvious step is to run ls with the -l flag to get a list with not just the filenames but also the dates and the file sizes we need to sort against.

We will get a more verbose output than we need. Like this one:

$ ls -l
total 148692
-rwxr-xr-x 1 root root      51984 May 16  2020 [
-rwxr-xr-x 1 root root     283728 May  7 18:13 appdata2solv
lrwxrwxrwx 1 root root          6 May 16  2020 apropos -> whatis
-rwxr-xr-x 1 root root      35608 May 16  2020 arch
-rwxr-xr-x 1 root root      14784 May 16  2020 asn1Coding
-rwxr-xr-x 1 root root      18928 May 16  2020 asn1Decoding
What is apparent is that, to get the kind of output we want, we have to get rid of the fields we do not need (the permissions, link-count, owner and group columns in the listing above), but that is not the only thing needing work. We also need to sort the output so that the biggest file is first in the list, meaning a reverse sort…

This, of course, can be done in multiple ways but it only shows again, how convoluted bash scripts can get.

We can either sort with the ls tool directly, by using the -r flag for reverse sort and the --sort=size flag to sort by size, or we can pipe the whole thing to sort and supply that with the -n flag for numeric sort and the -k 5 flag to sort by the fifth column.

Wait! Fifth? Yes. Because this, too, we would have to know: sort, by default, uses whitespace as the field separator, meaning that in the tabular output of ls -l the number representing the size is the fifth field.
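A quick way to convince yourself of the field numbering is to feed sort a couple of fixed ls -l style lines (made up here so the result is reproducible) and watch it order them by the size in field 5:

```shell
# Reverse numeric sort on field 5, just as for real 'ls -l' output.
printf '%s\n' \
  '-rw-r--r-- 1 root root 51984 May 16 2020 arch' \
  '-rw-r--r-- 1 root root 283728 May 7 18:13 appdata2solv' \
  '-rw-r--r-- 1 root root 6 May 16 2020 apropos' \
  | sort -rn -k5
```

The line with 283728 comes out first, the one with 6 last.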

Getting rid of fields and formatting a nice table

To get rid of the remaining fields, we once again have multiple options. The most straightforward option, and the most likely to be known, is probably cut. This is one of the few UNIX commands that is self-descriptive, even if only because of the natural brevity of its associated verb. So we pipe our results so far into cut and tell it which columns we want to output and how they are separated from each other.

cut -f5- -d" " will output everything from the fifth field to the end. This gets rid of the first columns.

   283728 May  7 18:13 appdata2solv
    51984 May 16  2020 [
    35608 May 16  2020 arch
    14784 May 16  2020 asn1Coding
        6 May 16  2020 apropos -> whatis

This is still far from how we wanted it. First of all, the filename is in the last column, and the filesize is given in the human-unfriendly format of plain bytes instead of KB, MB, GB and so on. Of course we could fix that too, in various ways, at various points in our already long pipeline.
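For the human-readable sizes, GNU coreutils happens to ship yet another helper, numfmt, that could be spliced into the pipeline at this point; one more tool to know about:

```shell
# numfmt converts a raw byte count into an IEC-style human readable size.
numfmt --to=iec 1048576
```

This prints 1.0M, turning one mebibyte's worth of bytes into the familiar suffix notation.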

All of this makes it clear that transforming the output of traditional UNIX commands is quite complicated and can often be done at multiple points in the pipeline.
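Putting the pieces together, one possible bash version of the whole task looks like the sketch below; it runs on fixed sample lines so the output is reproducible, and tr -s is needed because cut cannot treat a run of spaces as a single delimiter:

```shell
# Sort ls -l style lines by size (field 5), squeeze repeated spaces,
# then cut away the first four columns, keeping size, date and name.
printf '%s\n' \
  '-rw-r--r-- 1 root root 51984 May 16 2020 arch' \
  '-rw-r--r-- 1 root root 283728 May 7 18:13 appdata2solv' \
  | sort -rn -k5 | tr -s ' ' | cut -d' ' -f5-
```

Every stage of this pipeline encodes positional knowledge about the text layout; change the layout and the pipeline silently produces garbage.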

How it’s done in PowerShell

PS > Get-ChildItem
| Sort-Object Length -Descending
| Format-Table -AutoSize Name,
    @{Name="Size"; Expression=
        {[math]::Round($_.Length/1MB,2).ToString()+" MB"}},
    CreationTime
# Reformatted over multiple lines for better readability.

The only actual output transformation being done here is the conversion and rounding of bytes to megabytes for better human readability. This also is one of the only real weaknesses of PowerShell, that it lacks a simple mechanism to get human readable filesizes.

That part aside, it's clear that Format-Table allows you to simply list the wanted columns by their names, in the order you want them.

This works because of the aforementioned object-nature of piped data-streams in PowerShell. There is no need to cut apart strings by delimiters.

Remote Administration with PowerShell — PowerShell-Sessions on Linux!?


Remote administration via PowerShell on Windows has traditionally always been done via Windows Remoting, using the WinRM protocol.

With the release of Windows 10, Microsoft has also offered a Windows native OpenSSH Server and Client.

Using the SSH server alone on Windows presents the user with a CMD prompt, unless the default system shell is changed via a registry key.

A more elegant option is to make use of the Subsystem facility in sshd_config. This makes it possible to configure arbitrary binaries as remote-callable subsystems instead of the globally configured default shell.

By default there is usually one already there. The sftp subsystem.

To make PowerShell available as Subsystem one simply needs to add it like so:

Subsystem powershell /usr/bin/pwsh -sshs -NoLogo -NoProfile

This works (with the correct paths, of course) on all OSes PowerShell Core is available for. That means Windows, Linux, and macOS.

What this is good for

It is now possible to open a PowerShell (Remote) Session to a properly configured SSH-enabled Server by doing this:

PS > Enter-PSSession `
    -HostName <target-HostName-or-IP> `
    -UserName <targetUser> `
    -KeyFilePath <path-to-id_rsa-file>

What this does is to register and enter an interactive PSSession with the Remote-Host. By itself this has no functional difference from a normal SSH-session. It does, however, allow for things like running scripts from a local host on remote machines via other cmdlets that utilise the same subsystem.

One such example is the Invoke-Command cmdlet. This becomes especially useful, given that Invoke-Command has the -AsJob flag.

What this enables is running local scripts as batchjobs on multiple remote servers while using the local Job-manager to get feedback about when the jobs have finished on the remote machines.

While it is possible to run local scripts via ssh on remote hosts, it is not as straightforward to view their progress, and it gets outright hacky to run local scripts remotely. We refrain from giving examples here, for brevity's sake.

With PowerShell, however, this can be as easy as this:

PS > Invoke-Command -HostName $listOfRemoteHosts `
    -FilePath /home/Ozymandias42/Script2Run-Remotely.ps1 `
    -AsJob

Overview of the running tasks is available by doing this:

PS > Get-Job

Id     Name            PSJobTypeName   State         HasMoreData     Location             Command
--     ----            -------------   -----         -----------     --------             -------
1      Job1            BackgroundJob   Running       True            localhost            Microsoft.PowerShe…

The output of jobs can then be collected, should they require attention, by doing Receive-Job <JobName-or-JobNumber>.


In conclusion, PowerShell follows a fundamentally different philosophy in its syntax compared to standard POSIX shells like bash. For bash, of course, this is historically rooted in the limitations of the original UNIX. PowerShell provides better semantic clarity with its cmdlets and outputs, which means better understandability for humans and hence a shell that is easier to use and learn. PowerShell also provides aliases for cases where the unaliased cmdlet names are too long. The main difference, however, is that PowerShell is object-oriented, which eliminates most input and output parsing. This allows PowerShell scripts to be more concise.

Fedora Project community


  1. Ernest

    you talk about abbreviation and memorization like it’s a bad thing that power shell has “fixed” and then immediately introduce cmdlets or whatever. That’s when I stopped reading.

    • John Desant

      I Think the point is that the cmdlets are self descriptive.

      Gci is short for get-childitem
      Gwmi is short for get-wmiobject

      If you use cmds a lot you will want to know the abbreviations but it’s not necessary.

      You can also do other tricks like shortening where-object and for-each

Gcim win32_product | ? { $_.identifyingNumber -like "12345" } | % { write-host -foregroundcolor blue $_.Name }

    • Darvond

      It does seem strangely self-defeating; by introducing aliases, you’ve chicken and egged your way back to bash which already solves the issue. On one hand, you have something incredibly mature like shell scripting; approaching 50 years of history starting from the Thomson Shell.

      On the other hand, you have Powershell, which not even Microsoft Gurus wanted to use which is needlessly wordy and comes with a bunch of caveats.

      • Brian


PowerShell is definitely wordy, but there is a pattern of logic to the naming conventions of commands. It becomes intuitive to guess what something is/does without having ever used it before. There is an approved list of verbs for PowerShell commands whose usage is encouraged specifically for this purpose. This concept simply doesn't exist in Bash.

        Who cares if Microsoft “gurus” didn’t want to use it? The fact of the matter is that if you value that concept over the verbosity of the syntax, you gain an advantage that you wouldn’t have in Bash. That doesn’t change simply because of some ignorant curmudgeons. Ironically, you sound quite similar to that type yourself.

        By the way, my comments should not be misconstrued to seem like I think PowerShell is “better” than Bash or other traditional shells, but it’s

        • Jesse

          “It becomes intuitive to guess what something is/does without having ever used it before.”

          Guessing is NEVER intuitive.

  2. Ben

    Interesting write-up, as a long time *nix user I indeed prefer the shorthand commands, but agree it’s hard to translate intent to commands without the initial learning curve. That said, you can easily define long semantic aliases to shorthand commands in bash as well.
    The built-in structured data formatting is certainly nice, as is set based paradigm, consider me pleasantly surprised. I think it shows that PowerShell is designed in a fairly short time span, with all the user experience from decades of *sh usage, whereas *nix shells and the GNU commands are slowly grown over time, burdened by, for example, backward compatibility, POSIX, .. .

    In my experience, GNU tools and the shell are supremely powerful, but I also know that I prefer solving more advanced problems in higher languages (Python), if possible with typing support (Julia), in part because I know shell script is very easy to get wrong, in a subtle but future crippling way. I don’t think the addition of PowerShell changes that trade-off, for me.

    In any case, thank you for the clear and detailed article

  3. Bob

    *nix already has a “PowerShell.” It’s called (k|c|ba)sh. Yes, it takes time to memorize all those commands, just like it takes time to learn all the overly descriptive, long to type, commands in PS. Why in the world should I spend(waste?) time learning those, when bash does everything I need it to do?

    • Volker Krause

      Maybe you shouldn’t. I for example have to deal a lot with remoting and structured data and for me Powershell is a blaze.

  4. Objects change everything.

    • Daimon Sewell

      Do they though…
printf '{ "key": { "nest": "value" } }' | jq '.key.nest'
      It’s pretty easy to achieve in bash. And you can use Yaml, json, xml etc. String is a very encompassing format.

      • Brian

        I love jq, and I use it often, but the difference is that PowerShell actually takes input and outputs AS objects. Not everything in Bash is output in JSON (in fact, very few is). In PowerShell, full objects are piped.

        Also, JSON is merely a data structure, not a full-fledged object in the sense of an object-oriented language. In some languages, it’s kind of like the difference between a struct and a class. JSON is like a struct, whereas a PSObject is a class that can have constructors, destructors, methods, etc.

        Since all the inputs and outputs inherit from PSObject, you not only get the built-in PSObject functionality, but you can extend functionality of objects, you can create your own object types, and give these objects methods to operate on its own data. It’s pretty powerful.

        • Arouene

          “Not everything in Bash is output in JSON”… Well not everything in PowerShell works on Linux, like getting the IP, how do you get network information like the IP of the Linux machine in an object if Get-NetIPAddress doesn’t work ?

        • Strategist

As a long time user of *nix with good experience of OOP languages, if any need of objects arises, its better to go with programming languages… u could use very beautifully written classes and still define and use custom objects. Why would one want to use PS, unless he/she is using windows and want to use something native..

  5. Lockheed

    Thanks very much for this. being just a regular user I really much enjoyed it as I had not much to do with PowerShell yet aside from being aware that it exists. Now many more things are clear to me.

  6. Markus

Don’t forget that using powershell on Linux you have to use the full path to commands like wget, as powershell has an alias wget that uses Invoke-WebRequest, which doesn’t compare with the functionality of wget/curl.

    Most of the microsoft wizards that I know rather uses bash than powershell as they feel it’s illogical and you always need to use a search engine to find how to do things.

    IMHO powershell hasn’t brought much to the table and you can do everything in bash with a lot less rows of code than with powershell, so just wait for a proper object based shell, it may or may not come.

  7. Pefoo

    My two cents.
Powershell comes with quite a lot of default alias definitions. Some of them are obviously made for people coming from bash and the like. Get-Childitem has a ls alias for example. Furthermore there are quite hard to read aliases for people that like obfuscation. Foreach-object has a shorthand: %
    None of these alias definitions or shorthands should be used in a script however. For the obvious reason of readability.

    About the object oriented nature of powershell.
This sometimes comes in handy. Your article clearly focuses on the good things about it.
    However, sometimes it’s just a mess. Trying to filter output just like you would with grep is just a big pain. Furthermore it makes it harder to integrate non powershell tools. For interoperability the focus on plain processing is better (my opinion of course, no offense).

  8. Edward Campbell

    If a new shell is your bag, go for it. But please don’t remove ksh or bash. For objects I much prefer Perl or python.

  9. Darvond

    Alright, I’ll admit that I applaud that the writers went out of their way to actually explain all this.

    But as the other commentators have said, there’s a certain elegance that gets lost when using Powershell’s “clearer” command structure that most could just replicate in Py.

    The time it takes to type in these descriptive commands is indeed time that could be spent glancing at a cheatsheet or reference.

    grep awk cat foo; all powerful tools that are living code fossils (this isn’t bad) and remain in use to this very day.

  10. myself

    quick summary with 3 quotes

    “Most commands in traditional POSIX shells, like the Bourne Again Shell (BASH), are heavily abbreviated and often require memorizing”

    “In PowerShell on the other hand, commands are perfectly self-descriptive. ”

    “This lists the name of running processes in cyan. As you can see, many characters are in upper case and cmdlets names are relatively long. Let’s shorten them and replace upper case letters to make the script easier to type:”

    Let’s shorten them

    Nothing to add.

  11. Mike Lutta

    Thanks for the article. I know I’m just getting my feet wet and I learnt something about both PowerShell and Bash. That iterators are built into standard cmdlets is something I’ve learnt today.
    One small thing. The line ‘$() is d a subshell…’ Is that a typo or an inadvertent insertion?

  12. tablepc

    First thank you! I’ve been waiting & hoping for something more straight forward than bash. I’m by no means a bash expert. I only learn and use it as necessary because I find it to be a painful experience.

    Can Power Shell make things like this more straight forward and easier to understand for later reading? If so. please show me how. After going through making something like this, I find that if I need to change it later, figuring out how it works takes a while. I know the basic structure of the gsettings commands won’t change, and I find them to be quite easy to use. I’m curious about how the command interfaces, the pipe and sed would work and look differently.

gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:$(gsettings get org.gnome.Terminal.ProfilesList default | sed -e s/^\'// -e s/\'$//)/ use-system-font false

    • Stephen

      Fish is a good shell alternative to bash for those who would like an easier shell to use. It is available on Fedora Linux (dnf install fish).

  13. John

    Thank you for the write-up, it was very helpful.

    That said, I would point out that the description provided of performing things in bash confuses shell commands with programs. Posix shells have constructs built into them, like for looping, but things like “ls” and “ps” have nothing to do with the shell – those are just programs like any other (including the shell itself). As for naming conventions, I can write a program and name the executable “List-All-Executables-In-A-Purple-Font”… again, that has nothing to do with the shell.

    It is not just a difference of semantics… this is the power and elegance of shell programming and the *nix environment in general… extensibility.

  14. bohrasdf

    I don’t really understand… if powershell work with Objects, then what am I doing when I pipe in/out text? How would powershell deal with text files since they do not fit in the data structure powershell support? And does powershell have envs? What about job control and trap? Can I exec into a process?
    If I want to adopt powershell, I think it will take some time to find all those alternatives

  15. Ferdinando Simonetti

    I’d like to point out that “ps”, “grep”, “awk”, “jq” and so on… are not included in Bash. You can use them through Bash.
    Two cents from a 20+ year Linux/Unix sysadmin, recently exposed to Powershell, and having witnessed some of its capabilities.
