Category Archives: Tutorials

In honor of National Novel Writing Month (NaNoWriMo), I wanted to offer a smaller, and rather different, challenge.

Send me a PowerShell article.

Seriously. My name is Don Jones, and this is PowerShell.org, so you can probably figure out how to contact me. Send me an article between 800 and 3,000 words (including code) in Microsoft Word format. Don’t attach any scripts. Please keep the formatting super-simple: paste code from the PowerShell ISE, and use Word’s default styles otherwise. If you must include screenshots, please embed them in the doc, but also include them as a separate PNG in your e-mail.

You can write about anything, provided it’s PowerShell-related. What’s best? Some challenge that stumped you – and that you eventually solved (and please, tell us how). Something that you think folks could benefit from, or could learn to do better. Even an article that lays out both sides of a particular question and outlines the pros and cons of each argument. Doesn’t matter. What matters is that you write.

I will personally commit to reading every single one, and providing you with feedback on your article. When suitable, I’ll make some specific suggestions for improving the article. If you then fix it up accordingly, I’ll run it by a professional editor – and I’ll have it published. In some cases, we’ll publish it right here on PowerShell.org. In other cases, I’ll submit it to my friends at 1105 Media for their consideration in one of their IT magazines, like Redmond Magazine or MCPMag.com. Still others will go into the PowerShell.org TechLetter, which would be a huge help to our editors, who are always hungry for content.

Being able to communicate well is important in all walks of life, but being willing to share is even more important. Think you’ve got nothing to share? Wrong. You have unique experiences that everyone can learn from. You do not need to be an expert in order to have something valuable to share. We would all benefit a lot more if more people shared their experiences and successes – so now it’s your turn.

The deadline is November 30th, of course, and I’ll work my way through them all as quickly as possible. You’re not going to be judged on your grammar or spelling (although do use Word’s tools to help with those as much as they can). Don’t try to write fancy, or overly formal. In fact, just write like you’d talk. Read your piece back to yourself aloud, and if it sounds weird, fix it so it doesn’t. If it sounds good, it’ll read well.

C’mon. Take up the challenge. And tweet folks over to this article, too. Let’s make it a thing. My goal is to help at least a few folks become regular bloggers, either here or elsewhere, and my dream is to find maybe a couple of folks who can pick up a full-time column with a magazine or other publication. That’d be awesome. I know you’re out there – let’s get the party started.

Now that we’ve suitably rested, let’s get back to working with Desired State Configuration. Now, there are some basic features to work with that ship by default, and the PowerShell team has been blogging some additional resources, but in order to do some really interesting things with DSC, we’ll need to create our own resources.

The DSC Resource Structure

A DSC resource is, at its most basic, a PowerShell module. The module is augmented by a schema.mof file (we’ll get into that more in a minute or two). These modules expose three main functions: Get-TargetResource, Set-TargetResource, and Test-TargetResource. All three functions should share the same set of parameters.

Test-TargetResource

Test-TargetResource validates whether your resource is currently in the desired state, based on the parameters provided. This function returns a boolean: $true if the resource is in the state described, or $false if not.

Set-TargetResource

Set-TargetResource is the workhorse in this module. This is what will get things into the correct state. The convention is to support one parameter called Ensure that can take two values, “Present” or “Absent”, to describe whether a resource should be applied or removed as described.

(Here’s a little trick: if you break your Test-TargetResource logic into discrete functions, you can use those functions to run only the portions of Set-TargetResource that you need to!)

Get-TargetResource

This is currently the least useful of the commands, but if experience has taught me anything, it’ll likely have a growing use case over time.

Get-TargetResource returns the current state of the resource, returning a hash table of properties matching the parameters supplied to the command.

Exporting Commands

This module should explicitly export these commands via either Export-ModuleMember or a module manifest. If you don’t, Import-DscResource will have trouble loading the resources when you try to generate a configuration (it’s not a problem for running a configuration, just for the generation part).
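Putting the pieces together, here is a minimal sketch of what such a resource module might look like. The resource (a hypothetical module that simply ensures a file exists) and its parameters are illustrative, not a real shipping resource:

```powershell
# Sketch of a minimal DSC resource module (e.g. MyFileResource.psm1).
# All three functions share the same parameters, per the convention above.

function Get-TargetResource {
    [OutputType([Hashtable])]
    param (
        [Parameter(Mandatory)]
        [string]$Path
    )
    # Return a hash table describing the current state of the resource.
    @{
        Path   = $Path
        Ensure = if (Test-Path $Path) { 'Present' } else { 'Absent' }
    }
}

function Test-TargetResource {
    [OutputType([Boolean])]
    param (
        [Parameter(Mandatory)]
        [string]$Path,

        [ValidateSet('Present','Absent')]
        [string]$Ensure = 'Present'
    )
    # $true if the resource is already in the desired state, $false otherwise.
    (Test-Path $Path) -eq ($Ensure -eq 'Present')
}

function Set-TargetResource {
    param (
        [Parameter(Mandatory)]
        [string]$Path,

        [ValidateSet('Present','Absent')]
        [string]$Ensure = 'Present'
    )
    # Bring the resource into the desired state.
    if ($Ensure -eq 'Present') {
        New-Item -ItemType File -Path $Path -Force | Out-Null
    }
    else {
        Remove-Item -Path $Path -Force -ErrorAction SilentlyContinue
    }
}

# Explicitly export the three functions so Import-DscResource can find them.
Export-ModuleMember -Function *-TargetResource
```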

The Managed Object Format (MOF) Schema

The last piece of the DSC Resource is a schema file that maps the parameters for the command to a CIM class that can be registered in WMI. This allows us to serialize the configuration parameters to a standards-based format and allows the Local Configuration Manager to marshal the parameters back to call the PowerShell functions for the phase that the LCM is in. This file is named modulename.schema.mof.

There is no real reason to write a schema.mof file by hand; both the DSC Resource Designer and my New-MofFile function can help generate that file. The one key thing to be aware of in the schema.mof is that there is an attribute at the top of each of the MOF classes that denotes a friendly name, which is the identifier you will use in a configuration to specify a resource.
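For reference, a schema.mof for the hypothetical file resource above might look something like this (class and friendly names are illustrative) — note the FriendlyName attribute at the top of the class:

```mof
[ClassVersion("1.0.0"), FriendlyName("MyFile")]
class MyModule_MyFile : OMI_BaseResource
{
    [Key] string Path;
    [Write, ValueMap{"Present","Absent"}, Values{"Present","Absent"}] string Ensure;
};
```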

The reason we need a module metadata file for the base module is that when resources from that module are used in a configuration, the generated configuration MOF files reference the version of the base module (and that specific version is required on the node where the resource will be applied).

Next up, we’ll talk about how we package our resources to be distributed by a pull server.

The Local Configuration Manager offers a number of options, which we’ll examine.

AllowModuleOverwrite

This one is pretty straightforward and only impacts configurations where you are using a pull server. If you allow module overwrite, newer versions of modules can replace existing modules. If you don’t enable this, you’ll have to manually remove modules if you want a new copy to pull down.

CertificateID

CertificateID is the thumbprint of a certificate in the machine certificate store that will be used to decrypt any secrets present in the configuration. DSC allows PSCredential objects to be marshaled through a MOF file, but requires them to be encrypted (unless explicitly authorized otherwise). (There is another option as well: if you use the ConfigurationData feature, you can also supply the path to a certificate file to use – I’ll be blogging that scenario later when I cover some more advanced scenarios.)

ConfigurationID

The ConfigurationID is a GUID which uniquely identifies what configuration a node should retrieve from a pull server. If you haven’t had to generate GUIDs before, a really easy way to do so is:

PS> [guid]::NewGuid().Guid

ConfigurationMode

ConfigurationMode defines how the DSC client operates. There are three valid values:

Apply

ApplyAndMonitor

ApplyAndAutoCorrect

(NOTE: These descriptions of functionality are based on limited testing – the TechNet documentation is not up to date yet, but should be in the near future.)

Apply will apply the configuration once; after a successful run is logged, it will stop attempting to apply the configuration or check it. ApplyAndMonitor will apply a configuration as in Apply, but will continue to validate that a node is configured as described. No corrective action will take place if there is configuration drift. Finally, ApplyAndAutoCorrect is what most of us think of when looking at DSC as a configuration management tool. This setting applies a configuration and checks it regularly. If configuration drift is detected, the configuration manager will attempt to return the machine to the desired state (see how I worked the product name in there..).

ConfigurationModeFrequencyMins

This setting determines how frequently the configured method (the RefreshMode) will be run. In the case of a pull server, this is how frequently the pull server will be checked for updated configurations. The minimum value for this is 30. This value needs to be a multiple of the RefreshFrequencyMins; if it is not, the engine will treat it as if it were a multiple (rounded up).

Credential

The Credential supplied can be used for accessing remote resources.

DownloadManagerCustomData

DownloadManagerCustomData is a hashtable of values that is passed to the specified download manager. In the case of a pull server, the two possible keys are ServerUrl and AllowUnsecureConnection.

DownloadManagerName

Here is where we specify which download manager to use. DSC ships with two options: the WebDownloadManager (for the web-based pull server) and the DSCFileDownloadManager (for using an SMB share).

RebootNodeIfNeeded

Here’s another pretty self-explanatory setting. DSC offers a method for resources to request a reboot. If this setting is $true, then DSC will reboot the node when it is requested. If it is set to $false, DSC will notify (via the verbose stream and the DSC log) that a reboot is required, but not actually reboot the node.

RefreshFrequencyMins

The RefreshFrequencyMins setting determines how often DSC runs an integrity check against the cached configuration value (or, if the check falls on the ConfigurationModeFrequencyMins interval, against the pull server if one is configured). The minimum value for this setting is 15 minutes.

RefreshMode

RefreshMode is either PUSH or PULL. If you set the RefreshMode to PULL, you’ll need to configure a download manager (via DownloadManagerName).
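To show how these settings fit together, here is a sketch of an LCM meta-configuration for a pull client. The node name, server URL, and GUID are placeholders, and the specific values (frequencies, modes) are only examples:

```powershell
# Sketch: configure the Local Configuration Manager on a node to pull
# its configuration from a (hypothetical) web-based pull server.
Configuration PullClientConfig
{
    Node 'Server01'
    {
        LocalConfigurationManager
        {
            ConfigurationID                = '72ed4117-fc49-4f81-822c-5bc59db64dd3'
            RefreshMode                    = 'PULL'
            DownloadManagerName            = 'WebDownloadManager'
            DownloadManagerCustomData      = @{
                ServerUrl               = 'https://pull.example.com/PSDSCPullServer.svc'
                AllowUnsecureConnection = 'False'
            }
            ConfigurationMode              = 'ApplyAndAutoCorrect'
            ConfigurationModeFrequencyMins = 30   # multiple of RefreshFrequencyMins
            RefreshFrequencyMins           = 15   # minimum allowed value
            RebootNodeIfNeeded             = $true
            AllowModuleOverwrite           = $true
        }
    }
}

# Generate the meta-configuration MOF and apply it to the node:
PullClientConfig -OutputPath ./PullClientConfig
Set-DscLocalConfigurationManager -Path ./PullClientConfig
```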

Picking Back Up

Now that we have some of the basics down, we can start to look deeper at how composable these configurations are. A DSC configuration defined in PowerShell offers several advantages, not the least of which is that a configuration can be parameterized.

Parameterization

With this simple tweak, I’ve taken a configuration that was hard-coded to one server name to one that can take an array of server names. The PowerShell-savvy are probably going, “Big deal.. functions have been able to do that since Monad”. If you remember back to the last post, I showed how ConfigurationData could be used to pass data into a configuration. Then my main configuration did some stuff based on metadata about the node. My configuration was starting to look a bit complicated. The ability to parameterize configurations really helps us when we are ready for the next step: nesting configurations.
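A minimal sketch of such a parameterized configuration (the configuration name, node names, and feature are illustrative):

```powershell
# Sketch: a configuration parameterized on node name, so one definition
# can generate MOFs for many servers.
Configuration WebServerConfig
{
    param (
        [string[]]$NodeName = 'localhost'
    )

    Node $NodeName
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# Generate a MOF for each of several servers at once:
WebServerConfig -NodeName 'Web01','Web02','Web03' -OutputPath ./WebServerConfig
```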

So, what did we just see? I defined a parameterized configuration and then used it like a DSC Resource in my main configuration. Parameters are passed to the nested configuration in the exact same way as to a DSC Resource. This syntax also means that we can use DependsOn to create dependency chains between groups of functionality more easily.
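A sketch of that nesting pattern might look like the following (all names here are illustrative, not from a real deployment):

```powershell
# Sketch: a parameterized configuration used like a DSC resource inside
# a parent configuration, with DependsOn chaining the groups together.
Configuration BaseServer
{
    param ([string]$Role)

    WindowsFeature Backup
    {
        Ensure = 'Present'
        Name   = 'Windows-Server-Backup'
    }
}

Configuration MainConfig
{
    Node 'Server01'
    {
        # The nested configuration takes parameters just like a resource.
        BaseServer Common
        {
            Role = 'Web'
        }

        WindowsFeature IIS
        {
            Ensure    = 'Present'
            Name      = 'Web-Server'
            DependsOn = '[BaseServer]Common'   # depend on the whole group
        }
    }
}
```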

We can leverage this technique of creating nested configurations to simplify our configuration scripts, minimize dependency chains, and provide an easy way to reuse configuration sections for multiple configurations, all using the same semantics of any DSC resource.

Applying Configurations

Once we have our configurations generated, we have a couple of ways to distribute and apply the configurations. We’ll start assuming that we have generated our configurations for the servers we would like to target.

Start-DscConfiguration

Our first option is Start-DscConfiguration. We can point Start-DscConfiguration to the configuration files that we’ve generated (just point to the directory with the configuration files in it).

Start-DscConfiguration -Path ./MyFirstServerConfig

Doing this will attempt to run the generated configurations against any nodes specified. You can target specific servers by using the -ComputerName or -CimSession parameters.
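For example (assuming a Server02.mof exists in the output directory from the earlier step):

```powershell
# Apply the generated configuration to one specific remote node,
# waiting for completion and showing verbose progress.
Start-DscConfiguration -Path ./MyFirstServerConfig -ComputerName Server02 -Wait -Verbose
```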

One downside to using Start-DscConfiguration is that any custom resources (not nested configurations) need to be present on the remote node BEFORE applying the configuration.

You CANNOT create a configuration that uses the File resource (or any other resource) to place the custom resource on disk during the DSC run. While this would be a cool trick, the resources contain a schema.mof file that defines the interface DSC can use, and the DSC engine will error if it cannot find the resource interface when the configuration is validated before it is applied. One option is a two-phased configuration: one to distribute resources and a second to apply them.

Pulling a Configuration

The next alternative is to distribute configurations and resources using a pull server. In the box, DSC supports two types of pull server: a REST-based pull server (like the one described in my previous post) and an SMB-based pull server (described here). The pull server requires nodes to be identified by a GUID (the configuration ID, which we’ll talk about in an upcoming post) instead of a server name. The pull server also requires that each config be accompanied by a checksum file containing the file hash of the configuration file (for example, 72ed4117-fc49-4f81-822c-5bc59db64dd3.mof and 72ed4117-fc49-4f81-822c-5bc59db64dd3.mof.checksum). One word of caution: there can be no extra whitespace after the hash in the checksum file, or the hash check will fail on the client node. This means you cannot use
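One way to avoid the trailing-whitespace problem is to let the New-DSCCheckSum cmdlet generate the checksum file, or to write the hash yourself with a method that adds no trailing newline. A sketch, using the example MOF name above:

```powershell
# Generate the .checksum file for a configuration MOF with the in-box cmdlet:
New-DSCCheckSum -ConfigurationPath ./72ed4117-fc49-4f81-822c-5bc59db64dd3.mof -Force

# Or by hand: WriteAllText writes exactly the hash string, with no
# trailing newline that would break the hash check on the client node.
$mof  = './72ed4117-fc49-4f81-822c-5bc59db64dd3.mof'
$hash = (Get-FileHash -Path $mof -Algorithm SHA256).Hash
[System.IO.File]::WriteAllText("$mof.checksum", $hash)
```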

I started with an overview of the what and the why. Today, I’m going to start on the how.

Building a Pull Server

I’m going to describe how to do this with Server 2012 R2 RTM (NOTE: this is not the General Availability release, so there may be changes at GA), since that’s the environment I’m working in the most. If there is enough demand, I may follow up with how to do this using the Windows Management Framework on downlevel operating systems after the GA version of WMF 4 is released.

The first step is adding the required roles and features, including the DSC Service.

Add-WindowsFeature Dsc-Service

Fortunately, the Dsc-Service feature has the right dependencies configured so IIS, the correct modules, and the Management OData Extension are all enabled.

Next we need to set up the IIS web site:

Create a directory to serve the web application from (I’ll use c:\inetpub\wwwroot\PSDSCPullServer)

Copy several files from $pshome/modules/psdesiredstateconfiguration/pullserver (Global.asax, PSDSCPullServer.mof, PSDSCPullServer.svc, PSDSCPullServer.xml) to this directory.

Copy PSDSCPullServer.config and rename it to web.config

Create a subdirectory named “bin”.

Copy one file from $pshome/modules/psdesiredstateconfiguration/pullserver (Microsoft.Powershell.DesiredStateConfiguration.Service.dll) to the “bin” directory.

In IIS, create an application pool that runs under the “Local System” account.

In IIS, create a new site (or an application in an existing site, or just use the existing default site)

Point the site or application root to the directory you designated as the root of the site.
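The file-copy steps above can be scripted; here is a sketch using the default paths from this post (the application pool and application names are my own choices, not requirements):

```powershell
# Sketch: stage the pull server web application files.
$src  = "$pshome\modules\psdesiredstateconfiguration\pullserver"
$site = 'C:\inetpub\wwwroot\PSDSCPullServer'

New-Item -ItemType Directory -Path $site, "$site\bin" -Force | Out-Null

Copy-Item "$src\Global.asax", "$src\PSDSCPullServer.mof",
          "$src\PSDSCPullServer.svc", "$src\PSDSCPullServer.xml" -Destination $site
Copy-Item "$src\PSDSCPullServer.config" -Destination "$site\web.config"
Copy-Item "$src\Microsoft.Powershell.DesiredStateConfiguration.Service.dll" -Destination "$site\bin"

# The IIS pieces (app pool running as Local System, application pointed
# at the directory) can be scripted via the WebAdministration module:
Import-Module WebAdministration
New-WebAppPool -Name 'PSDSCPullServer'
Set-ItemProperty 'IIS:\AppPools\PSDSCPullServer' -Name processModel.identityType -Value LocalSystem
New-WebApplication -Site 'Default Web Site' -Name 'PSDSCPullServer' `
    -PhysicalPath $site -ApplicationPool 'PSDSCPullServer'
```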

Now we need to set up the location where the pull server content will be served from. Installing the DSC Service feature creates a default location ( $env:programfiles\WindowsPowerShell\DscService ). There you’ll find sub-directories for configuration and modules. We can use these folders or we can create another location. I’m going to stick with the defaults for now. We’ve got a few steps left.

First, we need to copy the Devices.mdb from $pshome/modules/psdesiredstateconfiguration/pullserver to the root of our pull server data location (in this case, $env:programfiles\WindowsPowerShell\DscService )
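That step is a single copy, shown here against the default data location:

```powershell
# Copy the device database to the root of the pull server data location.
Copy-Item "$pshome\modules\psdesiredstateconfiguration\pullserver\Devices.mdb" `
          "$env:ProgramFiles\WindowsPowerShell\DscService"
```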

I’m starting today with the general overview of what I’m trying to accomplish and why I’m trying to accomplish it. The what and why are critical in determining the how.

The Overview

Goal:

All systems have basic and general purpose roles configured and monitored for drift via Desired State Configuration.

Reason:

System configuration is one of the silent killers for sysadmins (yes, I prefer sysadmin to IT Pro – deal with it). In the case where deployments are not automated, each system is unique, a snowflake that results from our fallibility as humans.

The more steps that require human intervention, the more potential failure points. Yes, if I make a mistake in my automation, then that mistake can be replicated out. But as Deming teaches with the Wheel of Continuous Improvement (Plan, Do, Check, Act), we can’t correct a process problem until we have a stable process.

Deming Cycle

Every intervention by a human adds instability to the equation, so first we need to make the process consistent. We do that by standardizing the location(s) of human intervention. Those touch points become the areas that we can tweak to further optimize the system. I’m getting a bit ahead of myself though.

Let’s continue to look at how organizations tend to deploy systems. Organizations tend to have several levels of flexibility in how systems are built and provided for use. The three main categories I see are:

Automated provisioning from a purpose built image

Install and configure from checklist

Install and configure on demand

Usually, the size of the organization tends to indicate to what level they’ve automated deployments, but that is less true today. Larger organizations tend to have more customized and automated deployments. It’s mainly been a matter of scale. With virtualization and (please forgive me) cloud infrastructures, even smaller organizations can have ever-increasing numbers of servers to manage, with admin-to-server ratios of 1 to hundreds being common and the number of servers starting to overtake the client OS count.

If we aren’t in a fully automated deployment environment, each server has the potential to be subtly (or not so subtly) unique. Checklists and scripts can help with how varied our initial configurations can start out, but each server is like a unique piece of art (or a snowflake).

Try to make more than one of me…

That’s kind of appealing to sysadmins who like to think of themselves as crafters of solutions. However, in terms of maintainability, it is a nightmare. Every possible deviation in settings can cause problems or irregularities in operations that can be difficult to track down. It’s also much more work overall.

What we want our servers to be is like components fresh off the assembly line.

Keeping it consistent

Each server should be consistently stamped out, with minimal deviations, so that troubleshooting across like servers is more consistent. Or, even more exciting, if you are experiencing some local problems, refreshing the OS and configuration to a known good state becomes trivial. Building the assembly line and work centers can be time consuming up front, but pays off in the long haul.

My Situation:

At Stack Exchange, we are a mix of these categories. All of our OS deployments are driven by PXE boot deployments. For our Linux systems, we fall into the first group. We can deploy an OS and make the addition to our Puppet system, which will configure the box for the designated purpose. For our Windows systems, we operate out of the second and third groups. We have a basic checklist (about 30-some items) that details the standards our systems should be configured with, but once we get to configuring the server for a specific role, it’s been a bit more chaotic. As we’ve migrated to Server 2012 for a web farm and SQL servers, we’ve begun to script out our installations for those roles, so they were kind of automated, but in a very one-time-run way.

Given where we stood with our Windows deployments and the experience we had with Puppet, we looked at using Puppet with our Windows systems (like Paul Stack – podcast, video) and decided not to go that route (why is probably worthy of another post at another time). That was around the time that DSC was starting to peek its head out from under the covers of the Server 2012 R2 preview. Long story made short, we decided to use DSC to standardize our Windows deployments and bring us parity with our Linux infrastructure in terms of configuration management.

Proposed Solution: Desired State Configuration

DSC offers us a pattern for building idempotent scripts (contained in DSC resources) and an engine for marshaling parameters from an external source (in my case a DSC pull server, but it could be a tool like Chef or some other configuration management product) to be executed on the local machine, as well as coordinating the availability of extra functionality (custom resources). I’m building an environment where a deployed server can request its configuration from the pull server, reducing the number of touch points to improve consistency and velocity in server deployments.

Next up, I’m going to talk about how I’ve configured my pull server, including step by step instructions to set one up on Server 2012 R2.

This is a free e-book that covers PowerShell Remoting. There’s a brief overview and tutorial of actually using Remoting, but that part isn’t in-depth. What this e-book provides, that you won’t find elsewhere, is step-by-step, screenshot-based instructions for configuring Remoting for any imaginable scenario. You’ll also find troubleshooting tutorials and examples, and even information on how to explain Remoting to your corporate IT security team. It’s all the stuff that isn’t documented in PowerShell’s own help – and it’s completely free. You don’t even need to register to download the file!