Part I: Systems Management
This section highlights using PowerShell and adjacent tools to manage systems: conforming their configuration, spinning them up, and otherwise making them do the things we need them to do.
Automate Patching With PoshWSUS and PowerShell Scheduled Jobs
Introduction
Hello there! If you’re a sysadmin, chances are you have experienced both the joy and pain (but mostly pain) of a wonderful Microsoft product known as Windows Server Update Services, or WSUS. My current role involves heavy use of WSUS, patching fleets of servers and workstations. That means short maintenance windows and a number of tasks that must be performed before reboots. When you have more than one customer environment, being able to perform those tasks automatically, uniformly, and reliably is important. I want to show you how to make working with WSUS a little less painful.
Let’s get started!
PoshWSUS
PoshWSUS is a PowerShell module created by Boe Prox that manages WSUS. This module contains a plethora of useful cmdlets that allow you to manage & maintain clients and Windows Updates. There’s even a cmdlet to help you clean up WSUS.
PoshWSUS gives you a little more control than the built-in WSUS cmdlets available in Windows Server 2016 and above do.
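For a quick comparison, you can list what each module exposes. This sketch assumes both the WSUS role (which ships the built-in UpdateServices module) and PoshWSUS are installed:

```powershell
# Built-in WSUS cmdlets, available once the WSUS role is installed
Get-Command -Module UpdateServices

# PoshWSUS offers a noticeably larger surface area
Get-Command -Module PoshWSUS | Measure-Object
```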
PowerShell Scheduled Jobs
A scheduled job is a background job. Starting with PowerShell 3.0, a new module called PSScheduledJob was introduced with an additional 16 cmdlets for managing these jobs. You only need to know when, how often, and what command (a script) to run. When you have specific maintenance windows and a litany of maintenance tasks to complete, scheduled jobs can enhance your efficiency in performing those tasks.
Example Scenario
You have a WSUS server and have configured computer groups based on your three target groups: Production (PROD), User Acceptance Testing (UAT), and Development (DEV). To complicate things a little more, you also have Primary (PRI) and Secondary (SEC) servers as subgroups within each target group. On top of all this, each target group has a strict maintenance window, and the groups can’t be patched at the same time. Conservatively, you plan the monthly patch cycle for each environment to occur on the weekends. Developers want patches to be deployed on Saturday between 8 AM and 12 PM. UAT testing can only be performed Monday through Tuesday during normal business hours of 9 AM to 5 PM. The customer production environment must be patched between 7 AM and 11 PM on the last Sunday of each month according to the Service Level Agreement (SLA).
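Since the PROD window lands on the last Sunday of each month, a small date calculation saves you a calendar lookup each cycle. This helper is a sketch, not part of the scripts that follow:

```powershell
# Find the last Sunday of the current month by walking
# backwards from the last day of the month.
$lastDay = (Get-Date -Day 1).AddMonths(1).AddDays(-1)
while ($lastDay.DayOfWeek -ne 'Sunday') {
    $lastDay = $lastDay.AddDays(-1)
}
$lastDay.ToString('yyyy-MM-dd')
```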
Because you don’t like working weekends and have some PowerShell knowledge, you decide to build some scripts that will do this task for you. The only manual thing you will do is schedule a maintenance window in your monitoring solution to prevent extraneous alert emails during these times.
Getting Started
The following code example will download the latest version from the PowerShell Gallery (https://powershellgallery.com/):
# It's good practice to put modules in your user context, not the system's.
Install-Module -Name PoshWSUS -Scope CurrentUser
Alternatively, you can also clone from the source using Git:
# Places the module directly in your user context
Set-Location -Path "C:\users\$env:USERNAME\Documents\WindowsPowerShell\Modules\"

git clone https://github.com/proxb/PoshWSUS/
Once installed, verify that the module installed correctly and can be imported.
<#
If you cloned from the source directly, you may be able to skip this part
as long as you restarted your console session.
#>
Import-Module -Name PoshWSUS

# Show the possibilities!
Get-Command -Module PoshWSUS -All
Now you are ready to begin working with WSUS through the PoshWSUS module.
Connect And Poke Around
Now that you have the correct module loaded, you need to connect to your WSUS instance.
To do so you’ll need the Connect-PSWSUSServer cmdlet.
Get-Help Connect-PSWSUSServer -ShowWindow
The two parameters you will need are -WsusServer and -Port, the port WSUS is listening on.
By default, WSUS uses port 8530 for HTTP and 8531 for HTTPS/SSL.
If your WSUS instance is different, please consult docs.microsoft.com for more information.
Now that you know how to connect to your WSUS instance using the Connect-PSWSUSServer cmdlet and a few parameters, consider the following example.
Here the lab server’s Fully Qualified Domain Name (FQDN) is used and the default port of 8530.
The -Verbose parameter is used only to show you the connection messages:
Connect-PSWSUSServer -WsusServer 'WSUS.kindlelab.int' -Port 8530 -Verbose
Output:
VERBOSE: Connecting to WSUS.kindlelab.int <8530>

Name               Version        PortNumber ServerProtocolVersion
----               -------        ---------- ---------------------
WSUS.kindlelab.int 6.3.9600.18838 8530       1.8
As you can see, you now have an active connection and can begin issuing other cmdlets supplied by the module.
Start by using another cmdlet, Get-PSWSUSConfig, to gather some documentation about the server configuration.
Get-PSWSUSConfig
Output:
UpdateServer                  : Microsoft.UpdateServices.Internal.BaseApi.Up...
LastConfigChange              : 5/26/2019 2:22:27 PM
ServerId                      : 00000000-0000-0000-0000-000000000000
SupportedUpdateLanguages      : {he, cs, fr, es...}
TargetingMode                 : Server
SyncFromMicrosoftUpdate       : True
IsReplicaServer               : False
HostBinariesOnMicrosoftUpdate : False
UpstreamWsusServerName        :
UpstreamWsusServerPortNumber  : 8530
UpstreamWsusServerUseSsl      : False
UseProxy                      : False
ProxyName                     :
ProxyServerPort               : 8080
UseSeparateProxyForSsl        : False
SslProxyName                  :
SslProxyServerPort            : 443
AnonymousProxyAccess          : True
That’s just a small sampling of the available properties. There’s no need to go into further details as that would be out of the scope of this chapter. It’s just a little something to keep in mind while working with any WSUS server using this module.
Building A Solution For Automated Patch Management
At this point, it is strongly suggested that you have a documented plan in place for your patching cycle. If you recall in our example scenario, you have known maintenance windows to work with. This makes building a script a lot easier in the long run.
Here’s a little pseudo code to map out the initial process:
- Connect to WSUS instance.
- Set a few variables for groups, Knowledge Base (KB) list, and date/time.
- Read in the KBs that need to be deployed/installed.
- Set approval flag and deadline to designated groups.
- Disconnect from WSUS instance.
The first script will be called Deploy-PriSecUpdates.ps1 and will be for the PRI and SEC groups.
Now you can code!
# Import required tooling. It should already be loaded, but make sure.
Import-Module -Name PoshWSUS

# Begin connection.
# You can use an IP or FQDN here instead of the environment variable.
Connect-PSWSUSServer -WsusServer $env:COMPUTERNAME -Port 8530 -Verbose

# Process
# Set the groups
$PRI = Get-PSWSUSGroup -Name 'PRI'
$SEC = Get-PSWSUSGroup -Name 'SEC'

# Set the deadlines. SEC installs right away; PRI gets a 2-hour window.
$PRIDeadline = (Get-Date).AddHours(2)
$SECDeadline = (Get-Date)

# Text file containing the KBs
$Updates = Get-Content -Path 'C:\PScripts\Maintenance\updates.txt'
<#
The text file will look like this:

KB4509000
KB4509001
KB4509002
etc. for each KB released that month.

There's a way to eliminate this step, which will be discussed later.
#>

# Profit!
# Just as you would select and click on updates to install in the WSUS
# management console, the lines below do the same thing in code:

Get-PSWSUSUpdate -Update $Updates |
    Approve-PSWSUSUpdate -Group $PRI -Action Install -Deadline $PRIDeadline

Get-PSWSUSUpdate -Update $Updates |
    Approve-PSWSUSUpdate -Group $SEC -Action Install -Deadline $SECDeadline

# Cleanup. It's best practice to always close connections when they are
# no longer needed in a script.
Disconnect-PSWSUSServer

<#
Added bonus: an email alert to let you know the script ran.
You'll need an SMTP server in your environment or access to one elsewhere.
#>

#Region Email Alert
# Build a simple multi-line body.
$Body = @()
$Body += 'The following updates were released:'
$Body += "$Updates"

$Body = $Body | Out-String

$EmailSplat = @{
    From        = 'Automated Patching With WSUS <Strongbad@homestarrunner.com>'
    To          = 'homestar@homestarrunner.com'
    Subject     = 'WSUS Release Status - Updates Deployed'
    Body        = "$Body"
    Priority    = 'High'
    SMTPServer  = '1.2.3.4'
    ErrorAction = 'SilentlyContinue'
}

# Now just take the splatted values and pass them along to the cmdlet
Send-MailMessage @EmailSplat

#EndRegion
The email splat is completely optional, but it’s nice to have.
If you open up your WSUS management console now, you’ll notice that updates that were previously unapproved are now approved and deadlined.
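If you’d rather verify from the console than the GUI, you can explore the approval-related cmdlets the module ships (cmdlet names vary between PoshWSUS versions, so discover them rather than memorize them):

```powershell
# Discover the approval-related cmdlets available in PoshWSUS
Get-Command -Module PoshWSUS -Noun '*Approval*'
```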
The next script, Deploy-DevUatUpdates.ps1, will reuse much of the same code.
However, you’re going to change a couple of lines.
To save time, make a copy of the first script and rename it.
# Import required tooling. It should already be loaded, but make sure.
Import-Module -Name PoshWSUS

# Begin connection.
# You can use an IP or FQDN here instead of the environment variable.
Connect-PSWSUSServer -WsusServer $env:COMPUTERNAME -Port 8530 -Verbose

# Process
# Set the groups
$DEV = Get-PSWSUSGroup -Name 'DEV'
$UAT = Get-PSWSUSGroup -Name 'UAT'

# Set the deadlines. DEV installs right away; UAT gets a 2-hour window.
$DEVDeadline = (Get-Date)
$UATDeadline = (Get-Date).AddHours(2)

# Text file containing the KBs
$Updates = Get-Content -Path 'C:\PScripts\Maintenance\updates.txt'
<#
The text file will look like this:

KB4509000
KB4509001
KB4509002
etc. for each KB released that month.

There's a way to eliminate this step, which will be discussed later.
#>

# Profit!
# Just as you would select and click on updates to install in the WSUS
# management console, the lines below do the same thing in code:

Get-PSWSUSUpdate -Update $Updates |
    Approve-PSWSUSUpdate -Group $DEV -Action Install -Deadline $DEVDeadline

Get-PSWSUSUpdate -Update $Updates |
    Approve-PSWSUSUpdate -Group $UAT -Action Install -Deadline $UATDeadline

# Cleanup. It's best practice to always close connections when they are
# no longer needed in a script.
Disconnect-PSWSUSServer

<#
Added bonus: an email alert to let you know the script ran.
You'll need an SMTP server in your environment or access to one elsewhere.
#>

#Region Email Alert
# Build a simple multi-line body.
$Body = @()
$Body += 'The following updates were released:'
$Body += "$Updates"

$Body = $Body | Out-String

$EmailSplat = @{
    From        = 'Automated Patching With WSUS <Strongbad@homestarrunner.com>'
    To          = 'homestar@homestarrunner.com'
    Subject     = 'WSUS Release Status - Updates Deployed'
    Body        = "$Body"
    Priority    = 'High'
    SMTPServer  = '1.2.3.4'
    ErrorAction = 'SilentlyContinue'
}

# Now just take the splatted values and pass them along to the cmdlet
Send-MailMessage @EmailSplat

#EndRegion
Now all you have to do is sit back and wait for the reboots to occur! This is great and all, but you still have to manually run these scripts. To fully automate this task, you can use PSScheduledJobs.
Tying It All Together With PSScheduledJobs
Now that you’ve built two fully working scripts that perform all the point and click tasks you once had, it’s time to set a schedule so that you can meet SLA requirements.
For this, you will create a script for a one-time task.
# Since this is a throwaway script, we won't go crazy with getting credentials
$Options = New-ScheduledJobOption -RunElevated -RequireNetwork
$Cred = Get-Credential -UserName KindleLab\SB_WSUS

# Assign the trigger variables
$Trigger1 = New-JobTrigger -Once -At "7/13/2019 8:00:00 AM"
$Trigger2 = New-JobTrigger -Once -At "7/28/2019 7:00:00 AM"

# Build the jobs using splatting
$Job1 = @{
    Name               = 'Primary and Secondary Server Maintenance'
    Trigger            = $Trigger2
    Credential         = $Cred
    FilePath           = 'C:\PScripts\Deploy-PriSecUpdates.ps1'
    ScheduledJobOption = $Options
}

$Job2 = @{
    Name               = 'Development and UAT Server Maintenance'
    Trigger            = $Trigger1
    Credential         = $Cred
    FilePath           = 'C:\PScripts\Deploy-DevUatUpdates.ps1'
    ScheduledJobOption = $Options
}

# Registering the jobs finishes the script.
Register-ScheduledJob @Job1
Register-ScheduledJob @Job2
You only need to run this script once. After the jobs have been successfully registered, you can make alterations to them from within the Task Scheduler management interface just as you would any other scheduled task. The only major difference is the location in which these two jobs are stored. You can find the scheduled jobs under Task Scheduler Library > Microsoft > Windows > PowerShell > ScheduledJobs. Now if all goes according to plan, the scheduled jobs will execute the respective script for each of your environments with no interaction required.
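After registration, the PSScheduledJob cmdlets also let you inspect and manage the jobs without opening Task Scheduler at all:

```powershell
# List the registered scheduled jobs
Get-ScheduledJob | Select-Object Name, Enabled

# Review when each job is due to fire
Get-ScheduledJob | Get-JobTrigger

# After a run, results surface as regular background jobs
Get-Job -Name 'Development and UAT Server Maintenance' | Receive-Job
```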
Congratulations! You just automated a WSUS update deployment!
How This Script Fit Into My Process
The driving force behind the creation of this script stemmed from the fact that I have to wake early and sometimes work late to complete a patching schedule on time. The process involves connecting to a VPN, going through two jump servers, and then working with multiple tools and applications.
There is quite a bit of room for error, and a few have been made.
Luckily, there were documented procedures to follow, which made creating scripts for the application and server maintenance plans easy.
These scripts run as scheduled jobs before and after the Deploy-Updates.ps1 script runs.
All I needed to do was to time myself performing the tasks manually.
This gave me a rough estimate of how to set the cadence for execution of the scripts.
Over time, these schedules were fine tuned to happen in a fraction of the time it used to take to manually perform the task.
One environment that used to take 3 hours, now completes a full maintenance cycle in about an hour.
You may find yourself asking, “Why all the separate scripts?”. My reasoning for doing it this way is about separation. Separation allows me to better troubleshoot a failing part of my process. For instance, a shutdown script may run correctly but the deployment script fails. I don’t have to troubleshoot one massive script and can also check the Task Scheduler for timestamps and error codes. You may find yourself not having or wanting that much control.
Next Steps
Now that you can automate update deployments on your own schedule, try something else. Here are some suggestions based on common tasks when working with WSUS:
- Try setting up an automatic cleanup schedule using Start-PSWSUSCleanup.
- Try adding and removing groups in WSUS using New-PSWSUSGroup and Remove-PSWSUSGroup.
- Use Add-PSWSUSClientToGroup and Remove-PSWSUSClientFromGroup to try adding and removing clients in WSUS.
Summary
The PoshWSUS and PSScheduledJob modules allow you to take control of WSUS instances using PowerShell. Using these modules nearly eliminates the need to ever open a WSUS console again. This will save you time and further automate what can be an annoying, repetitive, yet necessary systems administration task. It’s my hope that this chapter has inspired you to do more with PowerShell.
Building SQL Servers with Desired State Configuration
Desired State Configuration (DSC) is a powerful tool that enables you to build and configure infrastructure with ease. Although this was first available with Windows Management Framework (WMF) 4.0, the release of WMF 5.1 added many useful enhancements. If you are working with an older server, before Windows Server 2016 (or Windows 10), it’s recommended that you upgrade WMF manually.
DSC has many use cases, and there are varying degrees of complexity and integration that you can strive towards. This chapter demonstrates creating a reasonably simple DSC setup that will install and configure a SQL Server instance on a target node.
Lab setup
The examples in this chapter are executed against a small lab setup on a laptop.
Hyper-V hosts two Windows Server 2019 virtual machines: one is used as an authoring station (DscSvr1), while the configurations are enacted against the second server (DscSvr2).
Infrastructure as Code
To lay some groundwork on why you might want to use DSC, it’s important to talk about Infrastructure as Code (IaC). IaC is the idea that our infrastructure can be defined using code. This is hard to wrap your head around if you still picture servers as physical machines living in your data center. Today the majority of our servers are virtual machines (VMs), so it’s easier to imagine these when talking about IaC.
A simple example of IaC would be a script that defines your VM. The script defines the number of virtual CPUs, the amount of memory allocated, and the disk layout. This document becomes the artifact in this process, and the first step of embracing IaC is to check that into source control. With the description of how to build your server in source control you now have an auditable log of any changes that are implemented with a clear outline on who made what change when.
The source control repository is also the first step in a continuous integration/continuous delivery (CI/CD) pipeline. Benefits of using a CI/CD pipeline to build servers include:
- Security - only the pipeline service account needs high-level privileges to build servers, rather than people.
- Repeatability - when moving through environments an important checkbox won’t be forgotten by a human.
- Built-in documentation - as previously mentioned, any changes are documented which creates an audit log.
With these benefits comes the major risk that the IaC pipeline contains the blueprints of your entire infrastructure. The pipeline should be highly secured to ensure that these blueprints don’t fall into the wrong hands.
Desired State Configuration
DSC is a tool that enables us to implement IaC. It provides the framework needed to define the desired state of your infrastructure. DSC is written using a Domain Specific Language (DSL). It’s PowerShell but with its own domain of terminology and patterns.
DSC is also based on open standards. DSC uses Windows Management Instrumentation (WMI) and the Common Information Model (CIM). WMI is an industry standard for managing and configuring your enterprise environment. CIM is used to represent the objects you are configuring. The standard for configuring CIM classes is the Managed Object Format (MOF). These three standards mean that DSC can integrate with other configuration management implementations.
Stages of Desired State Configuration
There are four main stages to consider when talking about DSC. These stages are used as a guide to walk through the entire process of building a SQL Server.
Author
The first stage in this process is to write our configuration. This will define the desired state of our target node or nodes. This configuration document should contain the complete definition of your target node, and as mentioned earlier with IaC, should be source controlled.
Declarative Syntax
PowerShell is usually written using imperative language. This code steps through the actions that should be taken to make the desired changes. For example, the snippet below will create a folder:
New-Item -Path 'C:\temp' -ItemType Directory
Unlike the above code to create a folder, DSC Configurations are written using a declarative syntax.
This describes the desired state of our target node.
The following describes that a directory should be present at the location C:\temp, but it doesn’t explain how to complete the task.
File CreateDataDir {
    DestinationPath = 'C:\temp'
    Ensure          = 'Present'
    Type            = 'Directory'
}
Using the above declarative syntax removes the need to include any error handling. With the imperative code, the first execution succeeds, but any subsequent runs will be met with a sea of red since the folder already exists. With declarative language, if the folder exists, the desired state is met and no further action is required.
Idempotent
Another concept to understand when working with DSC is idempotency. Being idempotent means that the same configuration can be applied multiple times with the same end result. If the node is already in the desired state, the configuration will note that and carry on. If it’s not in the desired state, code will be executed to “make it so.” This means that incremental changes can be made to our configuration. When the configuration is reapplied, only the changes will be executed.
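To make the contrast concrete, here is roughly what you would have to write imperatively to get the idempotent behavior the File resource provides for free (a sketch, not DSC itself):

```powershell
# Imperative equivalent: test first, act only if the desired state isn't met.
$path = 'C:\temp'
if (-not (Test-Path -Path $path -PathType Container)) {
    New-Item -Path $path -ItemType Directory | Out-Null
}
# Running this twice is safe; the second run changes nothing.
```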
Resources
The main building blocks used to write a configuration document are resources.
These come packaged as PowerShell modules and contain the code needed to get your target node into the defined desired state.
If your machine has WMF installed, you will also have the PSDesiredStateConfiguration module which contains 22 resources already built-in.
PowerShell likes to be helpful.
Armed with Get-Command and Get-Help you can find functions and cmdlets for whatever you need.
DSC is no different.
It comes with Get-DscResource, which can be used to find resources on your local machine, and Find-DscResource to search the PowerShell Gallery.
First, use Get-DscResource to investigate the built-in module.
Running the following will list out all the resources included:
Get-DscResource -Module PSDesiredStateConfiguration
| ImplementedAs | Name | ModuleName | Version | Properties |
|---|---|---|---|---|
| Binary | File | | | {DestinationPath, Attributes, Ch…} |
| PowerShell | Archive | PSDesiredStateConfiguration | 1.1 | {Destination, Path, Checksum, Cr… |
| PowerShell | Environment | PSDesiredStateConfiguration | 1.1 | {Name, DependsOn, Ensure, Path…} |
When using the -Name parameter to get a single resource, the -Syntax parameter can also be used as shown below.
This gets the syntax needed to use the resource within your configuration.
You can copy and paste the output returned and begin filling in the properties you need to define.
Get-DscResource -Name File -Syntax
File [String] #ResourceName
{
    DestinationPath = [string]
    [Attributes = [string[]]{ Archive | Hidden | ReadOnly | System }]
    [Checksum = [string]{ CreatedDate | ModifiedDate | ...}]
    [Contents = [string]]
    [Credential = [PSCredential]]
    [DependsOn = [string[]]]
    [Ensure = [string]{ Absent | Present }]
    [Force = [bool]]
    [MatchSource = [bool]]
    [PsDscRunAsCredential = [PSCredential]]
    [Recurse = [bool]]
    [SourcePath = [string]]
    [Type = [string]{ Directory | File }]
}
The built-in resources aren’t the only ones available.
There are many more modules full of resources on the PowerShell Gallery.
To search for those, use Find-DscResource.
In the example below a specific resource name is passed in, but wildcards can also be used.
Find-DscResource -Name SqlSetup
| Name | Version | ModuleName | Repository |
|---|---|---|---|
| SqlSetup | 13.0.0.0 | SqlServerDsc | PSGallery |
On June 20th 2019, there were 1,506 resources available in the PowerShell Gallery.
They can be counted using (Find-DscResource).Count.
Writing the Configuration
Once the required resources have been identified, it’s time to create a configuration document.
The first line is similar to creating a function in PowerShell.
Since DSC uses a DSL, the rest of the code looks different.
The configuration has been named CreateSqlFolder.
Within the configuration, the Import-DscResource keyword will be used to pull in the modules that contain the resources needed to configure the target node.
Then the Node block is defined. There can be one or more node blocks per configuration. Each node block can contain more than one target node. Below, two target node names have been passed in using array notation. Nested within the node blocks are where the resources are defined. Two File resources are added which create the directories needed for the data and log files once SQL Server is installed. Each resource is followed by a name. This friendly name must be unique within the configuration. It’s a good idea to use meaningful names for these resources as they will be shown in the output. Having names that describe what they do will help when troubleshooting.
Configuration CreateSqlFolder {

    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node @('dscsvr1', 'dscsvr2') {
        File CreateDataDir {
            DestinationPath = 'C:\SQL2017\SQLData\'
            Ensure          = 'Present'
            Type            = 'Directory'
        }
        File CreateLogDir {
            DestinationPath = 'C:\SQL2017\SQLLogs\'
            Ensure          = 'Present'
            Type            = 'Directory'
        }
    }
}
Once the above code is executed, you can see that a command has been created with the special command type of Configuration.
Get-Command -CommandType Configuration
| CommandType | Name |
|---|---|
| Configuration | CreateSqlFolder |
Managed Object Format Files
Once our configuration has been written, it must be compiled into a MOF file. This occurs by executing the configuration:
CreateSqlFolder -Output .\Output\
When you execute this, the defined output folder will now contain two MOF files, one per node.
Configuration Data
The configuration written in the last example was simple and not realistic for a real-world situation. The next step to enhance the configuration is using Configuration Data to separate the “data” from the “code.” The Configuration Data is written as a hash table, either within the same file as the configuration or as an external psd1 file. If the following is saved as a psd1 file, the filename can then be passed into the configuration.
@{
    AllNodes = @(
        @{
            NodeName    = "DSCSVR1"
            Environment = "Test"
        },
        @{
            NodeName    = "DSCSVR2"
            Environment = "Production"
        }
    )
    NonNodeData = @{
        DataDir = "C:\SQL2017\SQLData\"
        LogDir  = "C:\SQL2017\SQLLogs\"
        TestDir = "C:\TestForJess"
    }
}
The hash table must contain an ‘AllNodes’ key. It can also include an optional ‘NonNodeData’ key. The above hash table defines two target nodes: one test and one production. It then defines three directories within ‘NonNodeData’ that can also be accessed from within the configuration.
To use the Configuration Data within the Configuration, it will need to be passed in using the Common Parameter of -ConfigurationData.
The $AllNodes special variable is used to access the defined nodes.
The NonNodeData is accessed using the $ConfigurationData variable.
Configuration CreateSqlFolder {

    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node $AllNodes.NodeName {
        File CreateDataDir {
            DestinationPath = $ConfigurationData.NonNodeData.DataDir
            Ensure          = 'Present'
            Type            = 'Directory'
        }
        File CreateLogDir {
            DestinationPath = $ConfigurationData.NonNodeData.LogDir
            Ensure          = 'Present'
            Type            = 'Directory'
        }
    }

    Node $AllNodes.Where{$_.Environment -eq "Test"}.NodeName {
        File CreateTestDir {
            DestinationPath = $ConfigurationData.NonNodeData.TestDir
            Ensure          = 'Present'
            Type            = 'Directory'
        }
    }
}

CreateSqlFolder -Output .\Output\ -ConfigurationData .\03a_ConfigurationData.psd1
There is a mix of DSC and regular PowerShell in the second node block.
The Where method is used to apply this block to only certain nodes.
An extra test directory will be created on servers that have the environment defined as test in the configuration data.
This provides flexibility.
The same configuration can be used for all environments while still allowing some differences.
Publish
In the publish phase of DSC, the MOF files that were just created are shipped out to the target nodes. There are two modes that DSC can be used in: push and pull. Push mode is simpler to set up and is therefore used in these examples. Push mode involves actively pushing configurations to the target node. In pull mode, the nodes are registered to a pull server that contains the modules and configurations needed.
There are three options for how to set up a pull server: using the pull service on a Windows Server, setting up an SMB share, or using the Azure Automation platform. Once a pull server is set up, you then register your target nodes with it. The nodes check in to determine if there’s a configuration that should be applied. The check-in frequency is a configurable setting.
There are two commands that can be used to push out MOF files to the target node.
First, Publish-DscConfiguration can be used to deliver the MOF to the node without immediately enacting it.
Instead, the configuration is applied after a configurable interval has passed.
Publish-DscConfiguration -Path .\output\ -ComputerName dscsvr2 -Verbose
The other option is to use Start-DscConfiguration.
This will push the MOF file and immediately enact to reach the desired state.
Start-DscConfiguration -Path .\output\ -ComputerName dscsvr2 -Wait -Verbose
In this example, use the -Wait and -Verbose parameters to be able to see the output returned to your console.
If you don’t specify these parameters, the console will return immediately and the execution will take place in the background within a PowerShell job.
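If you omit -Wait, Start-DscConfiguration hands back a job object you can monitor like any other PowerShell job:

```powershell
# Push the configuration asynchronously and capture the job
$job = Start-DscConfiguration -Path .\output\ -ComputerName dscsvr2

# Check on progress, then collect the output once it finishes
Get-Job -Id $job.Id
Wait-Job $job | Receive-Job
```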
Enact
Once the MOF file gets to the target node, the Local Configuration Manager (LCM) takes over. This is the engine of DSC. Its job is to parse and enact the MOF. There are many settings that can be used to configure the LCM. Several of the available settings are described below. For a full list, reference the Microsoft Docs.
| Setting | Description |
|---|---|
| ActionAfterReboot | What should happen after a reboot. ContinueConfiguration or StopConfiguration. |
| CertificateID | Thumbprint of the certificate used to encrypt the MOF file. |
| ConfigurationMode | What the LCM does with the configuration document. This setting can be used to automatically keep your node in the desired state. ApplyOnly, ApplyAndMonitor or ApplyAndAutoCorrect. |
| ConfigurationModeFrequencyMins | How often should the LCM check configurations and apply them. If the ConfigurationMode is ApplyOnly this is ignored. |
| RebootNodeIfNeeded | If during the configuration a reboot is required should the node automatically reboot. |
| RefreshMode | Does the LCM passively wait for configurations to be pushed to it (push), or actively check in with the pull server for new configurations (pull). |
These settings can be changed by enacting a meta configuration.
First, check the current settings using Get-DscLocalConfigurationManager:
Get-DscLocalConfigurationManager -CimSession dscsvr2 |
    Select-Object ActionAfterReboot, RefreshMode, ConfigurationModeFrequencyMins
| ActionAfterReboot | RefreshMode | ConfigurationModeFrequencyMins |
|---|---|---|
| ContinueConfiguration | Push | 15 |
ConfigurationModeFrequencyMins controls how often the LCM checks configurations and applies them. To change it, you can use the following configuration to define the desired settings:
[DSCLocalConfigurationManager()]
configuration LCMConfig
{
    Node dscsvr2
    {
        Settings
        {
            ActionAfterReboot              = 'ContinueConfiguration'
            RefreshMode                    = 'Push'
            ConfigurationModeFrequencyMins = 20
        }
    }
}

LCMConfig -Output .\output\
The final line of the above snippet will create a meta MOF file for the target node.
Once that has been generated, it can be applied by using Set-DscLocalConfigurationManager:
Set-DscLocalConfigurationManager -Path .\output\ -ComputerName dscsvr2 -Verbose
Monitor
The final step in the process is to review the configuration and report on any configuration drift. This is when nodes are no longer in the desired state that was defined.
This is the part of DSC that’s lacking slightly. Hopefully, as DSC continues to be developed, there will be more work around the reporting aspect. Currently, the options for more complete reporting are to pair DSC up with a 3rd party tool, or to write your own tooling around your DSC implementation.
There is some information available though. It makes sense if you are following along to come back to this section after the last section of the chapter where you will install a SQL Server. The output below is from post SQL Server install using DSC.
First, check the current configuration of the node using Get-DscConfiguration:
```powershell
Get-DscConfiguration -CimSession dscsvr2
```
```
...
ConfigurationName    : InstallSqlServer
DependsOn            : {[SqlSetup]InstallSql}
ModuleName           : SqlServerDsc
ModuleVersion        : 12.3.0.0
PsDscRunAsCredential :
ResourceId           : [SqlDatabase]CreateDbaDatabase
SourceInfo           :
Collation            : SQL_Latin1_General_CP1_CI_AS
Ensure               : Present
InstanceName         : MSSQLSERVER
Name                 : DBA
ServerName           : DSCSVR2
PSComputerName       : dscsvr2
CimClassName         : MSFT_SqlDatabase
...
```
It’s important to note that this isn’t stating the desired state, just the current configuration. For example, if the DBA database is dropped and the above command is rerun, you get the following:
```
...
ConfigurationName    : InstallSqlServer
DependsOn            : {[SqlSetup]InstallSql}
ModuleName           : SqlServerDsc
ModuleVersion        : 12.3.0.0
PsDscRunAsCredential :
ResourceId           : [SqlDatabase]CreateDbaDatabase
SourceInfo           :
Collation            : SQL_Latin1_General_CP1_CI_AS
Ensure               : Absent
InstanceName         : MSSQLSERVER
Name                 : DBA
ServerName           : DSCSVR2
PSComputerName       : dscsvr2
CimClassName         : MSFT_SqlDatabase
...
```
It now shows that the Ensure property is Absent, but it doesn’t note that this isn’t the desired state.
The next command available is Get-DscConfigurationStatus.
This will return detailed information on completed configuration runs.
```powershell
Get-DscConfigurationStatus -CimSession DscSvr2
```
| Status | StartDate | Type | Mode | RebootRequested | NumberOf Resources | PSComputerName |
|---|---|---|---|---|---|---|
| Success | 6/23/2019 7:45:40 AM | Initial | Push | False | 11 | DscSvr2 |
To determine whether the node is still in the desired state, you can use Test-DscConfiguration.
Run with only the -CimSession parameter, it returns just True or False.
For more detail, you can add the -Verbose switch, which outputs all the DSC verbose messages.
This is more useful than the first option, but it means reading through all the output to find which resources aren’t in the desired state.
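As a quick sketch, the two simpler invocations described above look like this (assuming the same DscSvr2 CIM session):

```powershell
# Returns a single boolean: is the node in the desired state?
Test-DscConfiguration -CimSession DscSvr2

# Streams the full DSC verbose output while each resource is tested
Test-DscConfiguration -CimSession DscSvr2 -Verbose
```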
Our final option is to use the -Detailed parameter.
This returns a PowerShell object that can be manipulated to find the information needed.
For example, the code below selects just the resources that aren’t in the desired state:
```powershell
Test-DscConfiguration -CimSession DscSvr2 -Detailed |
    Select-Object ResourcesNotInDesiredState
```
| ResourcesNotInDesiredState |
|---|
| {[SqlDatabase]CreateDbaDatabase} |
Troubleshooting
Although separate from monitoring, another important skill is troubleshooting your configuration when something fails or the expected results don’t occur.
Along with the commands highlighted in the monitoring section, the Windows event logs can provide a lot of detail on what might have gone wrong.
These logs can be found on the target node in Event Viewer under Application and Services Logs > Microsoft > Windows > Desired State Configuration.
There are two logs, “Operational” and “Admin,” that are turned on by default.
If these don’t provide enough detail you can enable two additional logs, “Analytic” and “Debug.”
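As a sketch, these logs can be enabled and read from an elevated PowerShell session on the target node. The log names below are those documented for DSC; verify them on your build before relying on this:

```powershell
# Enable the Analytic and Debug logs (both are disabled by default)
wevtutil.exe set-log 'Microsoft-Windows-Dsc/Analytic' /q:true /e:true
wevtutil.exe set-log 'Microsoft-Windows-Dsc/Debug' /q:true /e:true

# Read the most recent events from the Operational log
Get-WinEvent -LogName 'Microsoft-Windows-Dsc/Operational' -MaxEvents 10 |
    Select-Object TimeCreated, LevelDisplayName, Message
```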
Desired State Configuration and SQL Server
With a whistle-stop tour of the DSC architecture under your belt, you can now apply it to a real-world problem. If you install SQL Server often, you have probably developed a checklist similar to the one below to make sure each server is built to the same standard.
- Install Windows Features - .NET Framework
- Create directories for Install/Data/Logs/Tempdb
- Install SQL Server
- Enable TCP/IP
- Set Windows Firewall
- Server Configuration Options (sp_configure)
- Backup compression
- Cost threshold for parallelism
- MAXDOP
- Create DBA Database
Each time a request comes in to build a server, this list can be followed and you’ll end up with a server that meets your needs. However, it can be a tedious process to manually follow this list. Instead, let’s translate this to use DSC.
For each step on the list there is a DSC resource available that can be used instead.
| Step | Module | DSC Resource |
|---|---|---|
| Install Windows Features | PSDesiredStateConfiguration | WindowsFeature |
| Create directories | PSDesiredStateConfiguration | File |
| Install SQL Server | SqlServerDsc | SqlSetup |
| Enable TCP/IP | SqlServerDsc | SqlServerNetwork |
| Set Windows Firewall | SqlServerDsc | SqlWindowsFirewall |
| | NetworkingDsc | Firewall |
| Server Configuration | SqlServerDsc | SqlServerConfiguration |
| Create DBA Database | SqlServerDsc | SqlDatabase |
You can see that there are some options available.
For example, the task of setting up the Windows Firewall can be completed in one of two ways.
Either the SqlWindowsFirewall resource from within the SqlServerDsc module can be used to open up the firewall for the features installed,
or, if more flexibility is needed, the Firewall resource from the NetworkingDsc module can be used.
This resource has more properties that can be set than those that are made available with the SqlWindowsFirewall resource.
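To illustrate the more flexible option, a Firewall resource from NetworkingDsc might look like the sketch below. The rule name and port are assumptions for a default SQL Server instance, not values from the sample scripts:

```powershell
Firewall AllowSqlTcp
{
    Name        = 'SQL-Server-Default-Instance-TCP'  # hypothetical rule name
    DisplayName = 'SQL Server Database Engine (TCP 1433)'
    Ensure      = 'Present'
    Enabled     = 'True'
    Direction   = 'Inbound'
    Protocol    = 'TCP'
    LocalPort   = '1433'   # default instance port; adjust for named instances
    Action      = 'Allow'
}
```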
Once the resources have been chosen, they will form the basis of the configuration. A full version of both the configuration document and some sample configuration data are available in the downloads for this book.
Using the lab setup described earlier, SQL Server was installed and configured as defined in the above checklist in less than 5 minutes. That’s pretty impressive!
There are a few things to note within the sample scripts.
First, note the NonNodeData within the configuration data.
You can structure your configuration data in more than one way to define the properties that will be used in the configuration.
For the directory structures, each folder is listed directly under the NonNodeData key.
However, the SQL Server configuration settings that will be used by the SqlServerConfiguration resource are provided in a nested array called ConfigOptions.
```powershell
NonNodeData = @{
    DataDir     = "C:\SQL2017\SQLData\"
    LogDir      = "C:\SQL2017\SQLLogs\"
    InstallDir  = "C:\SQL2017\Install\"
    InstanceDir = "C:\SQL2017\Instance\"
    ConfigOptions = @(
        @{
            Name    = "backup compression default"
            Setting = 1
        },
        @{
            Name    = "cost threshold for parallelism"
            Setting = 25
        },
        @{
            Name    = "max degree of parallelism"
            Setting = 4
        }
    )
}
```
The difference in how the configuration data is structured means that it will be used differently when these properties are accessed within the configuration. This is a design point that you will want to think about as you build out your own solutions. For the folder structures, the resources will be built out as already highlighted, accessing the folder paths as so:
```powershell
File CreateInstallDir {
    DestinationPath = $ConfigurationData.NonNodeData.InstallDir
    Ensure          = 'Present'
    Type            = 'Directory'
}

File CreateInstanceDir {
    DestinationPath = $ConfigurationData.NonNodeData.InstanceDir
    Ensure          = 'Present'
    Type            = 'Directory'
}
```
Since our configuration options are structured as an array, PowerShell can be used to generate the individual resources by looping through each item.
This will create a resource for each setting in the ConfigOptions array.
As an example, the resource to set backup compression will be named “SetConfigOption_backup compression default”.
```powershell
$ConfigurationData.NonNodeData.ConfigOptions.foreach{
    SqlServerConfiguration ("SetConfigOption_{0}" -f $_.Name) {
        DependsOn    = '[SqlSetup]InstallSql'
        ServerName   = $Node.NodeName
        InstanceName = 'MSSQLSERVER'
        OptionName   = $_.Name
        OptionValue  = $_.Setting
    }
}
```
The Get-Credential cmdlet is used to set the “sa” user password.
When this configuration is executed, a popup will appear prompting for the password.
```powershell
$saCred = (Get-Credential -Credential sa)
```
For this example, the following section has been added to the AllNodes key of the configuration data.
Specifying the NodeName as * means that these settings will be applied to all nodes.
In this case, PSDscAllowPlainTextPassword has been set to allow plain-text passwords to be stored in the MOF file.
This isn’t recommended for production use!
```powershell
@{
    NodeName                    = '*'
    PSDscAllowPlainTextPassword = $true
}
```
After you run this, if you inspect the MOF files that were generated, you can see the super secure sa password. For production, you should be encrypting your MOF file with a certificate. This is outside the scope of this chapter. More information on how to set this up can be found in the Microsoft Docs.
```
instance of MSFT_Credential as $MSFT_Credential1ref
{
    Password = "Password1234";
    UserName = "sa";
};
```
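For reference, the certificate-based alternative mentioned above replaces PSDscAllowPlainTextPassword with a certificate reference in the node’s configuration data. The file path and thumbprint below are placeholders, not values from the sample scripts:

```powershell
@{
    NodeName        = 'dscsvr2'
    # Public key certificate used to encrypt credentials in the MOF
    CertificateFile = 'C:\certs\dscsvr2.cer'       # placeholder path
    # Thumbprint the LCM uses to locate the matching private key
    Thumbprint      = '<certificate thumbprint>'   # placeholder
}
```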
Another useful property is DependsOn.
This is a common property that’s available on all resources.
In the resource definition below, this property ensures that the configuration won’t attempt to create the DBA database if the InstallSql resource isn’t in the desired state.
This is vital if you’re using WMF 4.0, where DSC didn’t follow the order laid out in the configuration and instead enacted resources in an arbitrary order.
It’s still important with WMF 5.1: if the InstallSql resource fails for any reason, the dependency stops the database-creation resource from being enacted.
```powershell
SqlDatabase CreateDbaDatabase {
    DependsOn    = '[SqlSetup]InstallSql'
    ServerName   = $Node.NodeName
    InstanceName = 'MSSQLSERVER'
    Name         = 'DBA'
}
```
When running the configuration, the -Wait and -Verbose switches are set on Start-DscConfiguration so that the output is returned to the console as the configuration runs.
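Assuming the compiled MOF sits in .\output\ as in the LCM example earlier, that call looks like this:

```powershell
Start-DscConfiguration -Path .\output\ -ComputerName dscsvr2 -Wait -Verbose
```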
Below is a snippet of this output showing the LCM enacting the CreateDbaDatabase resource.
You can see it starts by running a test to see if the node is in the desired state.
In this case it returns false so the set is called to ‘make it so’.
```
VERBOSE: [DSCSVR2]: LCM: [ Start Resource ] [[SqlDatabase]CreateDbaDatabase]
VERBOSE: [DSCSVR2]: LCM: [ Start Test ] [[SqlDatabase]CreateDbaDatabase]
VERBOSE: [DSCSVR2]: [[SqlDatabase]CreateDbaDatabase]
    Checking if database named DBA is present or absent
VERBOSE: [DSCSVR2]: [[SqlDatabase]CreateDbaDatabase]
    Information: PowerShell module SqlServer not found, trying to use older SQLPS module.
VERBOSE: [DSCSVR2]: [[SqlDatabase]CreateDbaDatabase]
    Importing PowerShell module 'SQLPS' with version '14.0' from path 'C:\Program Files
    (x86)\Microsoft SQL Server\140\Tools\PowerShell\Modules\SQLPS\SQLPS.psd1'.
VERBOSE: [DSCSVR2]: [[SqlDatabase]CreateDbaDatabase]
    Connected to SQL instance 'DSCSVR2'.
VERBOSE: [DSCSVR2]: [[SqlDatabase]CreateDbaDatabase]
    Getting SQL Databases
VERBOSE: [DSCSVR2]: [[SqlDatabase]CreateDbaDatabase]
    SQL Database name DBA is absent
VERBOSE: [DSCSVR2]: [[SqlDatabase]CreateDbaDatabase]
    2019-06-26_03-33-51: Ensure is set to Present. The database DBA should be created
VERBOSE: [DSCSVR2]: LCM: [ End Test ] [[SqlDatabase]CreateDbaDatabase] in 0.1870 seconds.
VERBOSE: [DSCSVR2]: LCM: [ Start Set ] [[SqlDatabase]CreateDbaDatabase]
VERBOSE: [DSCSVR2]: [[SqlDatabase]CreateDbaDatabase]
    Found PowerShell module SQLPS already imported in the session.
VERBOSE: [DSCSVR2]: [[SqlDatabase]CreateDbaDatabase]
    Connected to SQL instance 'DSCSVR2'.
VERBOSE: [DSCSVR2]: [[SqlDatabase]CreateDbaDatabase]
    Adding to SQL the database DBA.
VERBOSE: [DSCSVR2]: [[SqlDatabase]CreateDbaDatabase]
    2019-06-26_03-33-51: Created Database DBA.
VERBOSE: [DSCSVR2]: LCM: [ End Set ] [[SqlDatabase]CreateDbaDatabase] in 0.7350 seconds.
VERBOSE: [DSCSVR2]: LCM: [ End Resource ] [[SqlDatabase]CreateDbaDatabase]
VERBOSE: [DSCSVR2]: LCM: [ End Set ]
VERBOSE: [DSCSVR2]: LCM: [ End Set ] in 227.8890 seconds.
VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 228.496 seconds
```
The last line shows it took 228 seconds, or just under 4 minutes to configure the target node.
As mentioned previously, since DSC is idempotent you can make incremental changes to this configuration and reapply it.
If you needed to create another database, you could add another resource to your configuration and rerun the script.
The LCM will go through the MOF file testing each resource to determine if it’s already in the desired state.
If it’s not, it will call the set function for that specific resource.
The same goes for if you decide to change one of the configuration options.
If, for example, you wanted to change the cost threshold for parallelism to 50, you would update the configuration data and rerun the configuration.
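Sketching that change, assuming the sample’s ConfigOptions structure, configuration data held in a variable such as $configData, and the InstallSqlServer configuration seen in the status output earlier:

```powershell
# In the configuration data, update the relevant entry...
@{
    Name    = "cost threshold for parallelism"
    Setting = 50
}

# ...then recompile the MOF and reapply it:
InstallSqlServer -ConfigurationData $configData -OutputPath .\output\
Start-DscConfiguration -Path .\output\ -Wait -Verbose
```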
There are many ways to expand on this example, but this should give you an idea on how you can build a SQL Server using PowerShell DSC with ease.
Next Steps
If this chapter has interested you and you’d like to know how to expand on this idea, there are a few recommended topics. Azure Automation has capabilities to work as a pull server for your cloud or on-prem machines. This can help with the management of configurations and modules.
The use of configuration data is also an area that can be expanded on. As you build out a full list of settings and options for different environments, or even types of servers, you will find that this document can get large and overwhelming. Gael Colas has written a PowerShell module called Datum that enables you to build out a hierarchy to manage this problem.
Finally, although not fully built out, there is the concept of ReverseDsc. This is a collection of PowerShell modules that you point at an already-configured node; the resulting output is the configuration that would have been needed to put that node in its current state. This can be a good way of creating Infrastructure as Code (IaC) artifacts for existing servers.