Part II: PowerShell Tips & Tricks

This section highlights topics that make your use of PowerShell a little easier, a little friendlier; topics to improve your quality of life as a PowerShell user and developer.

Cross Platform PowerShell: Notes from the Field

For years, PowerShell has been pigeonholed as a tool that can only be used to manage Windows hosts and nothing more. It's always 'if Linux: bash, if Windows: PowerShell.' But with PowerShell Core, you can do more! Wait, did I just rhyme? And in the introduction too. Oh the shame.

Now that it installs on most platforms, it should all just work like magic and you can sit back and relax, right? Well, for the most part, PowerShell Core does work everywhere - but like most things, the devil is in the details.

This chapter provides an overview of daily use operations like profiles, prompts, and modules. We then dive into hard-won best practices from the field, covering platform detection, aliases, environment variables, file access and encoding, newlines and case sensitivity, and path handling. The chapter ends with recommendations for using the PowerShell Visual Studio Code extension to ensure your scripts work no matter the platform.

Daily Use

The goal of PowerShell Core is to be as backwards compatible as possible with Windows PowerShell, while also expanding the possible install base to Linux and macOS platforms. This is a huge effort and there are some spots where PowerShell Core doesn't hold up to those goals, but for the most part PowerShell Core can be a daily driver.

You will find that the ‘built-in’ PowerShell modules and cmdlets work the same as they did in Windows PowerShell, just improved to handle other platform concerns. In many cases, these cmdlets and modules have benefited from community fixes and contributions that make them faster or easier to use in PowerShell Core than they were in Windows PowerShell.

In other cases, you will find gaps with modules or cmdlets that relied on Windows-specific APIs, where either some functionality was removed or entire features are absent. This is a consequence of the cross-platform goal: things like Windows Management Instrumentation (WMI) or Component Object Model (COM) just couldn't be ported to other platforms.

Profile Differences

If you are familiar with Windows PowerShell, you know all the different profiles you can have:

$profile | Format-List * -Force

Output:

AllUsersAllHosts       : C:\Program Files\PowerShell\6\profile.ps1
AllUsersCurrentHost    : C:\Program Files\PowerShell\6\Microsoft.PowerShell_profile.ps1
CurrentUserAllHosts    : C:\Users\james\Documents\PowerShell\profile.ps1
CurrentUserCurrentHost : C:\Users\james\Documents\PowerShell\Microsoft.PowerShell_profile.ps1

On Mac and Linux, they’re in slightly different places:

$profile | Format-List * -Force

Output:

AllUsersAllHosts       : /opt/microsoft/powershell/6/profile.ps1
AllUsersCurrentHost    : /opt/microsoft/powershell/6/Microsoft.PowerShell_profile.ps1
CurrentUserAllHosts    : /home/james/.config/powershell/profile.ps1
CurrentUserCurrentHost : /home/james/.config/powershell/Microsoft.PowerShell_profile.ps1

Notice the .config directory. That may seem strange if you aren't familiar with the X Desktop Group (XDG) Base Directory Specification. The XDG standard defines where applications should place their configuration files, data files, and caches. This is common among most macOS and Linux platforms, with some application-level support on others. The idea will look familiar if you've ever worked with Windows APPDATA files and folders, as it separates an application's configuration files from the data the application stores on the filesystem. For this scenario, it just means PowerShell will locate its configuration, history, and user data according to XDG specifications.

What does this mean in practice? If you are a Windows PowerShell user coming to PowerShell Core, your profile knowledge maps directly to PowerShell Core profiles on other platforms; you only have to account for slight differences in folder locations. If you are a *nix user coming to PowerShell, the profile locations should look familiar and not require much effort to adopt. In conclusion, something for everyone.

Module Differences

Where are my Modules? What Modules can I use?

Module Installation Paths

PowerShell Core modules are located in different directories than Windows PowerShell modules. On Windows, they're installed to $home/Documents/PowerShell/Modules. On non-Windows platforms, module installation locations follow the XDG Base Directory Specification, which means they're installed to ~/.local/share/powershell/Modules.

Module Loading Differences

Since PowerShell Core installs modules to different locations than Windows PowerShell, it also loads them differently. It still uses the $env:PSModulePath environment variable, but populates it with different values in PowerShell Core.

On Windows:

$env:PSModulePath -split ';'

Output:

C:\Users\james\Documents\PowerShell\Modules
C:\Program Files\PowerShell\Modules
c:\program files\powershell\7-preview\Modules
C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules

On Mac and Linux, they’re in slightly different places:

$env:PSModulePath -split ':'

Output:

/home/james/.local/share/powershell/Modules
/usr/local/share/powershell/Modules
/opt/microsoft/powershell/6/Modules

Note how we had to split on the : path separator instead of the ; separator used on Windows. We'll get to how to handle this difference later in the chapter when we talk about handling paths in Beware Paths.
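In the meantime, if you want a single command that works on both, .NET exposes the platform's separator directly; a minimal sketch:

# PathSeparator is ';' on Windows and ':' on Linux and macOS
$env:PSModulePath -split [System.IO.Path]::PathSeparator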

Using Windows PowerShell Modules in PowerShell Core

So if PowerShell Core installs and stores modules in different locations than Windows PowerShell, how do you use Windows PowerShell modules in PowerShell Core? It turns out to be harder than just prepending a valid Windows PowerShell path to $env:PSModulePath and telling PowerShell Core to skip the edition check on Windows PowerShell based modules with -SkipEditionCheck.

Most Windows PowerShell modules can work in PowerShell Core with minimal to no modification, provided they stuck to PowerShell cmdlets or base .NET APIs. If a Windows PowerShell module used Windows-specific APIs like COM or WMI, then it won't run in PowerShell Core. Those aren't supported, as they couldn't be made to run cross-platform. So are we out of luck if we want to continue using our existing Windows PowerShell modules?

Enter the WindowsCompatibility module. This module lets PowerShell Core access Windows PowerShell modules that aren't yet natively available on PowerShell Core. How can it do that if the module isn't supported? Through the power of Implicit Remoting! This feature has been around since PowerShell version 2, but isn't used as much as it should be. It uses PowerShell Remoting to provide wrapper cmdlets and functions that serialize the requests and responses. This allows PowerShell Core to invoke the Windows PowerShell module without requiring you to do anything differently.
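As a rough sketch of what daily use looks like (the ScheduledTasks import below is only an example of a Windows-only module):

# One-time install from the PowerShell Gallery
Install-Module WindowsCompatibility -Scope CurrentUser

# List Windows PowerShell modules available through the compatibility session
Get-WinModule *

# Generate proxy commands for a Windows-only module via implicit remoting
Import-WinModule ScheduledTasks
Get-ScheduledTask | Select-Object -First 3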

All good, right? Well, there are some caveats. This is a solution for Windows platforms only, as it uses Windows Remote Management (WinRM); you won't be able to use this to run Windows PowerShell cmdlets on macOS or Linux platforms. Since it's a serialized remoting approach, no GUI applications or use of Windows Presentation Foundation (WPF) or Windows Forms are allowed. It also requires at least PowerShell Core 6.1. Even with all of the caveats, this is a major improvement that lights up modules that aren't strictly supported, and an important bridge until a module finally becomes PowerShell Core compliant.

Notes From the Field

Ok, we've gotten past the shiny new cross-platform stuff and some of the gotchas. What are some things to be aware of when using PowerShell Core?

Aliases Aren’t Your Friend

If you've been a PowerShell user for any length of time, you've used aliases in the terminal and in scripts. You've developed opinions on when to use them, and recognized that the community holds a wide range of views on whether to use them at all. PowerShell Core makes this topic even harder because of its multi-platform nature.

Aliases are great in interactive use, when you're typing fervently at the console trying to make something work. They're not so great when you're reading the production deploy script at 4 AM trying to figure out what that three-letter abbreviation means. It's a long-held stricture in the community, first voiced by Jeffrey Snover, that you should be pithy at the command line and verbose in your scripts. Jeffrey coined the phrase to describe what PowerShell allows you to do compared to other programming languages, but in this scenario it's also good advice to follow.

Why the long preamble? In the case of PowerShell Core, it's even more important to be deliberate about what you want executed, because you aren't running on 'just Windows' anymore. A command like ls doesn't behave the same way on macOS as it does on Windows: it points to the ls binary, which has different parameters and emits plain text. Other aliases conflict with the names of standard Unix utilities like cat and curl. In the past, cat mapped to Get-Content, which caused problems on Linux because it returned lines terminated with carriage returns and newlines instead of just newlines.
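You can see the difference for yourself with Get-Command; the output in the comments is abbreviated:

# On Windows, ls is an alias that emits rich objects
Get-Command ls      # CommandType: Alias (ls -> Get-ChildItem)

# On Linux, the alias was removed, so ls resolves to the native binary,
# which emits plain text and takes different parameters
Get-Command ls      # CommandType: Application (/bin/ls)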

So, what do we do about it? In general, it will be a personal choice, but there is community discussion on paths forward. RFC #129 has initial discussions on removing or otherwise dealing with aliases, and issue #8970 is among many requests asking for aliases to be decoupled or removed.

Platform Variables

Some PowerShell Core cmdlets can detect and handle platform differences themselves, but what if you really, really need to know which platform you're running on in order to decide to do one thing or another? PowerShell Core has you covered with built-in variables that are automatically populated on every platform it supports. These variables are all boolean: true or false depending on the platform.

PS C:\Users\james> $IsWindows
True
PS /home/james> $IsLinux
True
PS C:\Users\james> $IsMacOS
False

You use these variables to determine which custom logic is needed for your environment: for example, which location to pull binaries from, or which command to run depending on the OS. This avoids having to know how to query the Common Information Model (CIM) on Windows or uname on Linux, in simple switch or if-else statements.

An easy snippet to keep handy is below. This switch statement determines which platform you are on, and only runs the code in the appropriate scriptblock.

switch ($true) {
  $IsWindows { 'PSCore on Windows!' }
  $IsLinux   { 'PSCore on Linux!' }
  $IsMacOS   { 'PSCore on MacOS!' }
}

These built-in platform variables won't work for more complicated scenarios where you need to know which Linux distribution you're running on, or whether you're running Windows Server 2012 R2 or 2016. When you get to those scenarios, you drop down to the raw query commands for your platform.
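A sketch of what that dropping down can look like; the class name, file path, and utility used here assume typical systems:

if ($IsWindows) {
    # CIM is the modern replacement for WMI queries
    (Get-CimInstance Win32_OperatingSystem).Caption
}
elseif ($IsLinux) {
    # Most modern distributions describe themselves in /etc/os-release
    Get-Content /etc/os-release | Select-String '^PRETTY_NAME='
}
elseif ($IsMacOS) {
    # Native macOS version utility
    sw_vers -productVersion
}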

Case Sensitivity

Dealing with case sensitivity is a recurring theme when working cross-platform, no matter the language you use. Case sensitivity means differentiating between upper case and lower case characters. For example, 'Windows' is different than 'windows'.

You probably already know that Windows is case insensitive and Linux is case sensitive. What you may not know is that Windows is case insensitive while still preserving case. To further complicate things, PowerShell itself is generally case insensitive.

All of this means that PowerShell is case insensitive but your platform may not be, and you have hidden bugs waiting to happen! What does this mean for how you use PowerShell Core? It means you have to start accounting for human error or variance in everything from file paths to user input that can depend on character casing. What’s a person to do?

We'll cover the two areas of most concern regarding case sensitivity: environment variables and file paths.

Environment Variables

PowerShell tries very hard to 'Do the Right Thing', but in the case of environment variables there are some things it can't shield you from regarding case sensitivity. On Linux and macOS, all environment variables are upper case, with the exception of PSModulePath. This variable was deemed important enough to break the convention and keep its mixed casing no matter the operating system, allowing scripts that use PSModulePath to work no matter the platform they run on.
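A quick illustration of what that means at the prompt, hedged to typical systems:

# On Linux, environment variable lookups are case sensitive
$env:HOME          # /home/james
$env:home          # nothing - no such variable on Linux (works on Windows)

# The deliberate exception keeps its mixed casing everywhere
$env:PSModulePath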

File Paths

Again, PowerShell tries very hard to ‘Do the Right Thing’, and in the case of file paths it does a pretty good job of protecting you from implementation details.

Here are some general rules to follow (a short sketch follows the list):

  • In general, rely on the PowerShell Core system to validate paths by using the built-in cmdlets.
  • Resolve-Path is your friend when trying to ensure the case in paths is correct no matter the platform.
  • When in doubt, use Test-Path to determine whether a path is present and has valid case at the same time.
  • If you have to compare paths as strings yourself for some reason, use case insensitive comparisons like -ieq.
  • Always remember to check your regex statements for case sensitivity.
  • When using the interactive shell, PowerShell Core on Linux won't tab complete incorrectly cased names.
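Here's a small sketch of a few of those rules in action:

# Test-Path follows the platform: case insensitive on Windows,
# case sensitive on Linux
Test-Path ./README.md    # False on Linux if the file is named readme.md

# When comparing path strings yourself, be explicit about casing
'C:\Temp\foo.txt' -ieq 'c:\temp\FOO.TXT'    # True  - explicitly insensitive
'C:\Temp\foo.txt' -ceq 'c:\temp\FOO.TXT'    # False - explicitly sensitive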

When to Use Shebangs

A shebang, or hashbang, is a character sequence beginning with a number sign and exclamation mark that indicates the file is to be used as if it were an executable. In general, the characters after the #! are parsed as a path to the thing that will execute the file, with the contents of the file as the code to execute. There are some variants of this workflow, but for most systems that's how it works.

In PowerShell Core 6.0, support for shebangs was added by changing the first positional parameter for pwsh from -Command to -File. This allows you to execute pwsh foo.ps1 or pwsh someFile without spelling out pwsh -File foo.ps1.

In order for pwsh foo.ps1 to work, you have to add #!/usr/bin/env pwsh as the first line of your PowerShell script:

#!/usr/bin/env pwsh

# awesome pwsh code here
Write-Host 'awesome'

Let's break this down a bit, using our definition above. We know that #! indicates we're using a shebang, that the path to the program that runs the file follows it (/usr/bin/env), and that the executable to look up comes last (pwsh).

Why use /usr/bin/env? The env utility lives at that path on virtually every Unix-like platform, and it finds pwsh on your PATH no matter where PowerShell was installed. As a bonus, running it on Windows works with that path too:

Microsoft Windows [Version 10.0.18362.239]
(c) 2019 Microsoft Corporation. All rights reserved.

C:\Users\james>

C:\Users\james>pwsh foo.ps1
awesome

C:\Users\james>

Having a shebang allows you to have a truly cross-platform script that runs anywhere PowerShell Core is installed.

Git Hooks

So, even with all this, when would you want to use shebangs with PowerShell Core? One increasingly common use case is git hooks. Git hooks allow you to run custom scripts whenever certain events occur in git (committing, merging, pushing, etc.). When git was written, an unfortunate assumption was made that all git hooks would be bash scripts. This is unfortunate because it restricted the ability to use git hooks across platforms, as not all platforms come with bash, never mind bash not being prevalent on Windows.

A workaround was to have your git hook call the bundled bash shell inside Git, then spawn PowerShell from there:

#!C:/Program\ Files/Git/usr/bin/sh.exe
exec powershell.exe -NoProfile -ExecutionPolicy Bypass \
  -File ".\.git\hooks\pre-commit.ps1"

Since Git and bash don't understand PowerShell, we have to store the PowerShell code in a separate PowerShell script file: .\.git\hooks\pre-commit.ps1.

# Verify user's Git config has appropriate email address
if ($env:GIT_AUTHOR_EMAIL -notmatch '@(non\.)?acme\.com$') {
    # super cool powershell code
    exit 1
}
exit 0

This is sub-optimal for several reasons. It requires multiple files: both the git hook itself and the PowerShell script. The path to Git's sh is hardcoded, so it has to be the same on every system that runs this. And it relies on powershell.exe being in PATH; you could hardcode that path too, but that isn't any better.

With PowerShell Core, there is a cleaner alternative:

#!/usr/bin/env pwsh

# Verify user's Git config has appropriate email address
if ($env:GIT_AUTHOR_EMAIL -notmatch '@(non\.)?acme\.com$') {
  # super cool pwsh code
  exit 1
}
exit 0

With PowerShell Core, we can put the PowerShell code directly in the git hook file and add a shebang to tell the system where to find the shell to run the code. In the end, this could prove to be a better solution than bash, as this is truly cross-platform code. Install PowerShell Core on all of your build and development systems, and the git hooks work no matter where they're run.
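One practical note if you try this yourself: on Linux and macOS, git only runs hook files that are marked executable, so remember to set the bit (assuming the default hooks location):

# chmod is the native utility here, invoked from any shell including pwsh
chmod +x .git/hooks/pre-commit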

Dealing with Files

File Encoding

When you start to create files that have to be readable on multiple platforms, you quickly realize that reading the content of a file is much harder than it sounds.

When creating files, UTF-8 is the best choice for cross-platform use, with ASCII being your last best hope. UTF-8 is readable by most modern applications and handles the most characters in a predictable way. ASCII is the best fallback for interoperability, as it restricts which characters it supports, so fewer things go wrong.

If you set the default parameters for cmdlets that use encoding like

$PSDefaultParameterValues["Out-File:Encoding"] = "UTF8"

in your profile or at the beginning of your scripts, you avoid having to remember to set the encoding every time you call one of those cmdlets.
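If you'd rather not list cmdlets one by one, $PSDefaultParameterValues also accepts wildcards; this one line defaults every -Encoding parameter on every cmdlet:

# Covers Out-File, Set-Content, Export-Csv, and anything else with -Encoding
$PSDefaultParameterValues['*:Encoding'] = 'utf8'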

File Newlines and Line Endings

The only thing as hotly debated as line endings is tabs vs spaces. We won’t get into value judgments here, just an exploration of what to consider.

There are two choices to use when writing newlines in files: \n and \r\n.

Most Mac and Windows applications will accept \n, but Unix shell interpreters will fail to read \r\n. This is especially important in the shebang line, where a stray carriage return can prevent the entire file from being executed rather than just producing weirdly terminated lines in output.

If you are writing scripts that will be read or executed on multiple platforms, you will have to standardize on Unix-style line feeds. That ensures all platforms will read and understand your text, and most modern editors let you configure line-ending defaults. This, among many other reasons, is a reason to move off of the PowerShell Integrated Scripting Environment (ISE).
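If you need to repair an existing file, here's a hedged sketch of one approach (the file name is just an example):

# Read the file raw, swap CRLF for LF, and write it back; resolving to an
# absolute path avoids the .NET working-directory gotcha covered later
$file = (Resolve-Path ./deploy.ps1).ProviderPath
$text = Get-Content -Raw $file
[System.IO.File]::WriteAllText($file, ($text -replace "`r`n", "`n"))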

File Access

Continuing the theme, as with the path cmdlets and environment variables, PowerShell Core knows how to handle file access across platforms. For most cases, use the built-in cmdlets for reading and writing files, as they know the variances between operating systems. For high-performance read/write scenarios you'll have to drop down to native APIs, but for typical input/output workloads the content cmdlets work fine.

However, beware the differences in platform support for cmdlets like Get-ChildItem. You will get back filesystem objects, but they won't have the paths you expect if you're used to Windows paths. Also be careful of aliases: if you still have ls defined somewhere and an ls -la sneaks in, you'll have a bad time.

Beware Paths - Building them

PowerShell Core has adapted itself to running on platforms that use different path indicators and separators, and so you need to start using the built-in functionality to do the same.

When building paths, use Join-Path and other path cmdlets like Test-Path, Split-Path, and Resolve-Path to do the hard work of knowing which path separator to use on which platform. Don't build path strings yourself using string interpolation like "C:\$examplePath\$anotherExamplePath". I can almost guarantee you will miss that one corner case where it's a backslash and not a forward slash. Besides being brittle, string interpolation makes you do all the work. Why add extra custom logic to detect whether it's Linux or Windows when a simple Join-Path works the same, is less code, and is reliable?
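A minimal before-and-after, reusing the hypothetical variables from the interpolation example above:

# Fragile: hardcodes the separator and assumes a drive letter
$badPath  = "C:\$examplePath\$anotherExamplePath"

# Portable: Join-Path emits the right separator for the platform
$goodPath = Join-Path $examplePath $anotherExamplePath
Join-Path 'foo' 'bar'    # foo\bar on Windows, foo/bar on Linux and macOS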

When coding your scripts, be aware that the platform you're sitting on may not be the platform the code runs on. Your code has to be path agnostic, so stick to concepts like adding [FileInfo] typed parameters to your functions or scripts. Don't assume the primary drive is C:. Use environment variables like $env:SystemDrive to find it, and remember it may simply be / elsewhere. While we're discussing default drives, don't assume all programs are installed in 'C:\Program Files'; use environment variables like $env:ProgramFiles to determine where things are.
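A short sketch of those last two rules; the Linux fallbacks here are illustrative assumptions, not standards:

# Ask the environment instead of assuming C:
$systemDrive  = if ($IsWindows) { $env:SystemDrive } else { '/' }
$programFiles = if ($IsWindows) { $env:ProgramFiles } else { '/usr/local' }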

Beware Paths Part Deux - Accessing

[System.Environment]::CurrentDirectory isn't set in .NET Core, so you can't rely on it for the correct directory.
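You can see the mismatch for yourself; exact values depend on where your session started:

Set-Location /tmp
(Get-Location).Path                       # /tmp - where PowerShell is
[System.Environment]::CurrentDirectory    # still wherever pwsh launched from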

If you need to call path-sensitive .Net APIs, you need to use

[System.IO.Directory]::SetCurrentDirectory(
    (Get-Location -PSProvider FileSystem).ProviderPath
)

first to update the .NET environment. This will set .NET’s location to the same thing that PowerShell thinks is the location. This is similar to the problems that have existed in PowerShell since version 1.

Note - PowerShell sets the working directory correctly when launching applications.

VS Code Config

Visual Studio Code (VSCode) is a great PowerShell editor, and with a few customizations can be an efficient tool to use.

Extensions

First off, install the PowerShell extension. If you need help installing the extension, this link has resources for the many ways to install it. This VSCode extension provides syntax highlighting, IntelliSense, and many more modern Integrated Development Environment (IDE) features. If you're a PowerShell ISE fan, it even comes with a PowerShell ISE theme to make you feel at home.

Outside of PowerShell, you should take a look at other VSCode extensions that will improve your day-to-day coding experience.

VSCode has something for everyone; it even has something for vim users!

Configuration

VSCode works for PowerShell coding out of the box, but to truly get the most out of your time, here are some configuration suggestions for cross-platform use. The examples below are all JSON, which you can reach inside VSCode by pressing Ctrl+Shift+P, typing settings, then choosing Preferences: Open Settings (JSON). You can use the graphical settings editor, Preferences: Open Settings (UI), if you prefer.

A general-purpose VSCode settings file looks like the following code block. It uses the information we've covered in this chapter to configure VSCode for cross-platform coding.

{
  // Put a line number in the gutter for the last line
  "editor.renderFinalNewline": true,
  // Show all whitespace. Alternatively, 'boundary' shows whitespace
  // except single spaces between words
  "editor.renderWhitespace": "all",
  // Removes trailing whitespace that was added by auto complete
  "editor.trimAutoWhitespace": true,
  // Use cross-platform compatible UTF8 file encoding
  "files.encoding": "utf8",
  // Set newlines to Linux format
  "files.eol": "\n",
  // Add an empty line at the end of a file
  "files.insertFinalNewline": true,
  // Ensure there is only one empty line at the end of a file
  "files.trimFinalNewlines": true,
  // Remove any extra whitespace to make git diffs easier
  "files.trimTrailingWhitespace": true,
}

In summary, the above configures VSCode to use UTF-8 encoding and Linux newlines, insert a final newline at the end of every file, show all whitespace characters, and trim extra whitespace and newlines.

If these config items conflict with ones you previously set, you can scope them to the language level in VSCode. This makes VSCode use these settings for PowerShell while applying your others to other languages.

{
  "[powershell]": {
    "editor.renderFinalNewline": true,
    "editor.renderIndentGuides": true,
    "editor.renderLineHighlight": "all",
    "editor.renderWhitespace": "all",
    "editor.trimAutoWhitespace": true,
    "files.encoding": "utf8",
    "files.insertFinalNewline": true,
    "files.trimFinalNewlines": true,
    "files.trimTrailingWhitespace": true,
  },
}

Notice that we can’t set "files.eol": "\n" in the PowerShell language scoped section. Setting line endings is a global VSCode setting, and can’t be set per language.

The PowerShell VSCode extension can be used without configuration, but just like VSCode, can be optimized to your usage patterns. The following settings ensure that the PowerShell extension will load when a PowerShell file is opened, and set the integrated console to not show on startup. This is a personal preference of mine, as I find the terminal popping open every time a PowerShell file is opened jarring.

{
  "powershell.startAutomatically": true,
  "powershell.integratedConsole.showOnStartup": false,
}

Choosing which version of PowerShell the extension uses when parsing your scripts and in the integrated console used to be difficult. The settings required you to type the paths to the PowerShell binary by hand, and were error prone. You can still do that, or you can click the PowerShell icon in the lower right corner, which presents a popup window showing the different PowerShell versions installed on your system. Picking one results in the following config entry:

{
  "powershell.powerShellExePath":
    "C:\\Program Files\\PowerShell\\7-preview\\pwsh.exe",
}

One great feature of the PowerShell extension is the code formatting feature. This automatically fixes your code to align with PowerShell coding standards. If you find you are disagreeing with some of those choices, there are formatting options to consider:

{
  // Use correct casing for cmdlets.
  "powershell.codeFormatting.useCorrectCasing": true,
  // Adds a space after a separator (',' and ';').
  "powershell.codeFormatting.whitespaceAfterSeparator": true,
  // Adds spaces before and after an operator ('=', '+', '-', etc.).
  "powershell.codeFormatting.whitespaceAroundOperator": true,
  // Adds a space before and after the pipeline operator ('|').
  "powershell.codeFormatting.whitespaceAroundPipe": true,
  // Adds a space between a keyword and its associated scriptblock expression.
  "powershell.codeFormatting.whitespaceBeforeOpenBrace": true,
  // Adds a space between a keyword and its associated conditional expression.
  "powershell.codeFormatting.whitespaceBeforeOpenParen": true,
  // Adds a space after an opening brace and before a closing brace
  "powershell.codeFormatting.whitespaceInsideBrace": true,
}

The "powershell.codeFormatting.useCorrectCasing" setting is an interesting feature. It’s disabled by default, but once enabled with automatically convert all of the references to cmdlet to their proper case and full name!

Another useful feature is the real-time script analysis from the PowerShell Script Analyzer. You can point the extension at a settings file you maintain, so you control which rules your project uses.

{
  // Enables real-time script analysis from PowerShell Script Analyzer.
  "powershell.scriptAnalysis.enable": true,

  // Specifies the path to a PowerShell Script Analyzer settings file
  "powershell.scriptAnalysis.settingsPath": "",
}

Wrap-up

In this chapter we've covered PowerShell Core daily use operations and some hard-won notes from the field. Throughout, we've examined how to do things with an eye toward using PowerShell Core on different platforms. While whether you can use PowerShell Core as your daily shell is largely determined by your use cases, I think what we've covered here shows that PowerShell Core is a shell and language successfully designed for cross-platform use.

PowerShell Development on Containers (PowerShell Core)

With PowerShell v7 on the verge of release, the importance of building cross-platform solutions in PowerShell is greater than ever. One issue with running tests is that your local PowerShell environment isn't "clean."

You install modules and software and change configurations that your target audience or servers won't share. Your solution can act differently in those environments, and you also risk not understanding the full dependency list and limitations of a target environment. That's why you need a clean PowerShell environment.

Sometimes you may just want to try a new PowerShell version, or test how it affects your tools before making the switch. While installing a new server, whether virtual or physical, is a time-consuming task, running a clean PowerShell Core in a container can take two minutes or less.

What Containers Are

Containers provide the ability to run an application in an isolated environment. The isolation and security of containers allow you to run many containers simultaneously on your PC. Containers are lightweight because they don’t require the extra load of a hypervisor and run directly within your machine’s kernel. This means you can run more containers on a given hardware combination than if you were using virtual machines.

Prerequisites

Before you can run containers, you need a container engine. The most popular one is Docker. Note, however, that you can't use Docker on Windows 10 Home edition due to Hyper-V limitations.

Docker for Windows

  • Windows 10 64-bit: Pro, Enterprise or Education (1607 Anniversary Update, Build 14393 or later).
  • Virtualization enabled in BIOS. Typically, virtualization is enabled by default. This is different from having Hyper-V enabled.
  • A SLAT-capable CPU.
  • At least 4 GB of RAM.

Docker for Linux

  • A 64-bit installation
  • Version 3.10 or higher of the Linux kernel. The latest version of the kernel available for your platform is recommended.
  • iptables version 1.4 or higher
  • git version 1.7 or higher
  • A ps executable, usually provided by procps or a similar package.
  • XZ Utils 4.9 or higher
  • A mounted cgroupfs hierarchy; a single, all-encompassing cgroup mount point isn’t sufficient.

If you meet the prerequisites, install the Docker engine from the Docker site.

Starting with Docker

Docker uses images to build containers. A container is a runnable instance of an image. An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. In our example, the PowerShell image is based on Linux, with PowerShell installed. Inside such a container, $PSVersionTable looks like this:

PS /> $PSVersionTable

Name                           Value
----                           -----
PSVersion                      6.2.1
PSEdition                      Core
GitCommitId                    6.2.1
OS                             Linux 4.9.125-linuxkit
Platform                       Unix
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0}
PSRemotingProtocolVersion      2.3
SerializationVersion           1.1.0.1
WSManStackVersion              3.0

You can create, start, stop, move, or delete a container. You also can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state. A container is defined by its image and any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that aren’t stored in persistent storage disappear.

Pull Docker Image

Before you can create any containers, you need to pull the image you want to use from the Docker Hub repository. Each image reference is built from two parts: image URL:tag. The image tag defines the version or configuration variation of the image. You can find more info about the tags in the image repository source. Begin by opening PowerShell and using the docker pull command to get the image you want:

Latest Stable Edition of PowerShell

docker pull mcr.microsoft.com/powershell:latest

Latest Preview Edition of PowerShell

docker pull mcr.microsoft.com/powershell:preview

You can choose what best fits your needs. When the Docker engine finishes downloading the image, you can move to the next step.

Creating a Container Based on PowerShell Image

Now, you can create containers based on the image and image tag you pulled. Create a new container using the docker run command:

docker run --name ps-core --interactive --tty mcr.microsoft.com/powershell:latest

Because you used the --interactive --tty switches, when the command executes, the session is switched to the container session context.
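A hedged usage note on getting back out: exiting the shell stops the container, while Docker's default detach sequence leaves it running.

PS /> exit    # ends the pwsh session and stops the container
# ...or press Ctrl+P followed by Ctrl+Q to detach and leave it running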

Managing Existing Containers

Finally, when you finish your tests, you need to decide what to do with the container. The container is persistent, which means you can continue working on it even if it's stopped. All the data will stay until you delete the container. Do you want to keep it for future use? Or do you not need it anymore and want to delete it?

Delete the Container

If you don't need the container anymore, you can remove it. Pass the ID or the name of the container to the docker rm command:

docker rm 'Container name'

If you don't remember the container ID or name, you can get it with the docker ps command:

docker ps

Keep The Container

If you decide to keep the container, you can re-use it at any time. First, make sure the container is running; you can use the docker ps command to check. If it's not running, start it with the docker start command:

docker start 'Container Name'

Now, to actively interact with it, use the docker attach command:

docker attach 'Container Name'

Use Visual Studio Code to Connect and Develop on a Container

Now you know how to create containers with PowerShell Core inside. But that PowerShell session isn't a development environment. You want to develop properly, in an environment that supports development operations like debugging, task running, version control, and more.

The Remote-Containers Extension

The Remote-Containers extension lets you attach VS Code to a running container. The extension lets you work with VS Code as if everything were running locally on your machine, except it's isolated inside a container. To install the Remote-Containers extension, open the Extensions view by pressing Ctrl+Shift+X and search for "Remote-Containers" to filter the results.

Attach VS Code to a Container

You can attach VS Code to a container in three steps:

  1. Press F1 to open the command palette.
  2. Type “Remote-containers: Attach To Running Container…”
  3. Choose the Container you want to attach to.

Now you’re connected to the container. You can run your scripts, modules, or any other solution on a clean and isolated environment.

The Docker Extension

The Docker extension makes it easy to build, manage, and deploy containerized applications from VS Code. To install the Docker extension, open the Extensions view by pressing Ctrl+Shift+X and search for "Docker" to filter the results. After installing the extension, you can add it to the activity bar by right-clicking on the activity bar and choosing Docker. You can use it to manage your containers, images, and more.

Manage Containers

In the Docker view, the first box is for containers. Here you can see all the containers available on your PC, each with a status icon next to it: a green play icon for running containers and a red stop icon for stopped ones. By right-clicking on a container, you can attach, start, stop, restart, or remove it.

Manage Images

Under the Images box, you can see which images are available on your PC. These are the images you can use to create containers. You can expand each image to see which tags you pulled. By right-clicking on an image tag, you can remove it or run a temporary container instance.

Summary

The possibilities of combining VS Code as a development environment with Docker containers as test environments are endless. When one test environment fails, a new one rises. You may find a whole new world of possibilities with containers. You can try the newest preview version of PowerShell in a container. You can also try modules from the internet without messing up your environment. Most importantly, you can develop and test your solutions much better. Hopefully this chapter helps you get started with containers and test your solutions better.