So here we are at post number 10, a good number to finalize this series with some closing notes.

I must say, creating this series has taken a lot of energy and time, but the reactions I have received thus far have made it more than worth it. Thank you, community, for the great feedback, and thank you for reading! I learned a lot writing this series; I hope you did too while reading it.

DSC Resources

Microsoft has put a lot of effort into producing and releasing DSC resources to the public. These resources are great, but you must know that the x prefix stands for experimental. This means you don't get any support, and some of them may not be entirely stable or may not work the way you expect.

I used xComputer as an example in this blog series: a great resource to domain join your computer (and do a lot more), but not suitable in a Pull Server scenario where the computer name is not known up front.

Is this bad? No, I don’t think so. The xComputer resource was probably built with a certain scenario in mind, and in its intended scenario it works just great. If a resource does not work for you, you could still take a look at it and build your own ‘fork’ or you could start from scratch. The modules / resources are written using PowerShell, so if you can read and write PowerShell, you’re covered. Just be creative and you will manage!

Pull Server

Ad-hoc configurations are more dynamic than Pull Server delivered configurations. When working ad hoc, you can dynamically populate the configuration document content by adding configuration data which is gathered / generated on the fly. Even the configuration block itself can contain dynamic elements. The resulting MOF file (configuration document) is created on and tailored for its destination. The downside of this approach is that configurations are generated on the fly, which can turn into an 'oops' situation more quickly.
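
To illustrate (a minimal sketch; the AD filter and the feature are arbitrary examples), an ad-hoc run can gather its node list on the fly, compile per-node MOF files and push them immediately:

configuration WebFarm
{
    node $AllNodes.NodeName
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# Configuration data generated at run time, e.g. from Active Directory.
$data = @{
    AllNodes = @(Get-ADComputer -Filter "Name -like 'WEB*'" |
        ForEach-Object { @{ NodeName = $_.Name } })
}
WebFarm -ConfigurationData $data -OutputPath .\WebFarm
Start-DscConfiguration -Path .\WebFarm -Wait -Verbose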

Pull Server configurations are more difficult to set up because configuration documents are static and created up front. If you create a single configuration for multiple instances (e.g. a web server farm), the configuration should be node name agnostic. The gain here is that configurations are delivered in a more controlled fashion, including the required modules. When a configuration change is made, the change can be pulled and implemented automatically.

Beware of using over-privileged accounts

Beware of over-privileged credentials used in configuration documents. Although you may have taken all the necessary precautions by encrypting sensitive data using certificates, if the box gets owned, the certificate private key is exposed and the credentials have therefore fallen into the wrong hands.

For example: credentials which interact with AD to domain join should be able to do just that and nothing more. In a VM Role scenario I would build an SMA runbook to pre-create a computer account as soon as the VM Role gets deployed. A low-privileged domain account is then delegated control over the object so it is able to domain join. DSC in this case does not have to create a computer account but can simply establish the trust relationship.
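
A sketch of that runbook idea (all names are examples, and the GenericAll grant is broader than what you would want in production):

Import-Module ActiveDirectory

# Pre-create the computer account (a runbook would derive the name from VMM).
$computer = New-ADComputer -Name 'WEB01' -Path 'OU=VMRoles,DC=contoso,DC=com' -PassThru

# Delegate control over just this object to the low-privileged join account.
$svc = Get-ADUser -Identity 'svc-domainjoin'
$acl = Get-Acl -Path "AD:\$($computer.DistinguishedName)"
$ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule($svc.SID, 'GenericAll', 'Allow')
$acl.AddAccessRule($ace)
Set-Acl -Path "AD:\$($computer.DistinguishedName)" -AclObject $acl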

VM Role challenges

When dealing with the VM Role and external automation / orchestration, some challenges arise.

There is no way (or at least no easy way) of coordinating between the VM Role resource extension and the DSC configuration state. DSC could potentially reboot the system and continue configuring after the reboot, then reboot again and again depending on the configuration. The resource extension allows for a script or application to restart the system, but treats the restart as the end of a task. As you don't know the configuration's reboot requirements up front, managing this in the resource extension becomes a pain, so you will probably not do it. As a result, the VM Role is reported as provisioned successfully to the tenant user while it is actually still undergoing configuration.

So VMs will reach a provisioned state and become accessible to the tenant user while the VM is still undergoing its DSC / SMA configuration. A user can potentially log in, shut down, restart, intervene and thereby disrupt the automation tasks. In the case of DSC this is not a big problem, as the consistency engine will just keep going until consistency is reached, but if you use SMA, for example, it becomes a bit difficult.

Another scenario: the user logs in and finds that the configuration he expected is not implemented. Because the user does not know DSC is used, the VM Role is thrown away and the user tries again and again until eventually he is fed up with the service received and starts complaining.

A workaround I use today at some customers is to temporarily assign the VM Role to another VMM user when the VMM deployment job is finished. This removes the VM Role from the tenant user's subscription and thereby from their control. The downside here is obvious: the tenant user just experienced a glitch where the VM Role disappeared, and may try to deploy it again. Because the initial name chosen for the Cloud Service is now assigned to another user and subscription, the name is available again, so there is a potential for naming conflicts when assigning the finished VM Role back to the original owner.

What’s next?

First I will do a speaker session at the SCUG / Hyper-V.nu event about VM Roles with Desired State Configuration. And no, it will not be a walkthrough of this blog series, so there is much to be done generating the content for this presentation. I think I will blog about the content of my session once I have done it.

Then, in the near future, I will start a new series built upon what we learned in this blog series. I have many ideas about what could be done, but I still have to think about scope and direction for a bit. This series took up a lot more time than I anticipated, and I changed scope many times because I wanted to do just too much. As a spoiler for the next series: I know it will involve SMA :-) Stay tuned at www.hyper-v.nu!

In a previous post I talked about why I did not include a domain join in my example DSC configuration:

So why not incorporate a domain join in this configuration? There is a resource available in the Resource Kit which can handle this right?
Yes, there is a resource for this and a domain join would be the most practical example I would come up with as well. But….

The xComputer DSC resource contained in the xComputerManagement module has a mandatory ComputerName parameter. As I don't know the ComputerName up front (it is assigned by VMM based on the name range provided in the resource definition), I cannot generate a configuration file up front. I could deploy a VM Role with just one instance, using a pre-defined ComputerName in a configuration document, but that scenario is very inflexible and undesirable. In a later post in this series I will show you how to create a DSC resource yourself to handle the domain join without the ComputerName being known up front.

In this blog post we will author a DSC resource which handles domain joining without the need to know the ComputerName up front, which makes it a suitable resource for the Pull Server scenario described in this series.
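
The essence of the resource (a simplified sketch, not the complete implementation) is that it never takes a ComputerName from the configuration document; it simply joins with whatever name the machine already has:

function Set-TargetResource
{
    param
    (
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [PSCredential]$Credential
    )
    # The locally assigned computer name is used implicitly, so the MOF file
    # never needs to know it up front.
    Add-Computer -DomainName $DomainName -Credential $Credential -Force
}

A complete script module would of course pair this with Get-TargetResource and Test-TargetResource functions.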

WMF 5 Preview release

When I started writing this blog series, Windows Management Foundation 5 (WMF 5) November Preview was the latest WMF 5 installment available for Windows Server 2012 R2.

Then WMF 5 February Preview came along and broke my resource by changing the parameter attributes (e.g. [DscResourceKey()] became [DscProperty(Key)] and [DscResourceMandatory()] became [DscProperty(Mandatory)]). I fixed the resource for the February Preview release (download WMF 5 February Preview here: http://www.microsoft.com/en-us/download/details.aspx?id=45883).
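
For reference, a bare skeleton of the February Preview syntax (the resource name and properties here are illustrative, not the actual resource):

[DscResource()]
class cDomainJoinExample
{
    [DscProperty(Key)]
    [string]$Domain              # was [DscResourceKey()] in the November Preview

    [DscProperty(Mandatory)]
    [pscredential]$Credential    # was [DscResourceMandatory()] in the November Preview

    [void]Set()  { }
    [bool]Test() { return $false }
    [cDomainJoinExample]Get() { return $this }
}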

The DSC resource Class definition is now declared a "stable design". Because of this I don't expect many changes anymore, and if a change were made, repairing the resource should be relatively easy.

I tested my resource locally (by adding it to the module directory directly) and it worked great. I thought I had done it: a Pull Server friendly resource to handle the domain join without the need to provide the computer name up front, using the latest and greatest Class-based syntax.

So I prepped the resource to be deployed for the first time via the Pull Server, and I was hit by a problem. I expected modules with Class-defined resources to just work when delivered via the Pull Server. The Pull Server itself actually has no problems with them at all, but the Web Download Manager component of the LCM is apparently hard-wired to check for a valid "script module" structure (at the time of writing, using the February Preview).

As a workaround, you could add the Class-defined module to the "C:\Program Files\WindowsPowerShell\Modules" directory of your VM Role images directly. The download of the module will then be skipped as it is already present (but you actually don't want to do this, because maintaining DSC resources in an OS image is undesirable).

To make this post a bit more future-proof, I will show you how to author both the Class-based module and the script-based module. Although you can only use the script-based module today, the Class-based module should be usable in the near future as well.

In this post the VM Role Resource Definition and Resource Extension that were built in an earlier post will be updated with the additional required steps (3+). Then the VM Role gets deployed and we will look at the result to validate that everything works as expected.

Extend the Resource Extension

First off, we extend the Resource Extension with some additional steps. We copy the current resource extension and give it another name so the previous state is safeguarded (if you did not create the VM Role resource definition and extension packages in part 3, you can download them here if you want to follow along: http://1drv.ms/1urL9AM).

Next, open the copied resource extension in the VM Role Authoring Tool and increase the version number to 2.0.0.0.

In this post the certificate files used for configuration document encryption are created. An example configuration containing encrypted sensitive data will be created as well.

Issue with CNG generated keys

While testing which certificate template settings would do the job intended for this blog post, I stumbled upon an interesting finding (bug?). Apparently the LCM uses .NET methods for accessing certificate keys. When the certificate keys are generated using the Certificate Next Generation API (see: https://technet.microsoft.com/en-us/library/cc730763(v=ws.10).aspx), the private key is not accessible to the LCM. It is also not visible when using the PowerShell Cert: PS drive.
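
You can spot the problem quickly from PowerShell (the subject filter is just an example): HasPrivateKey reports on the store, while the PrivateKey property goes through the .NET crypto service provider path that fails for CNG keys.

$cert = Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like '*DSC*' } | Select-Object -First 1
$cert.HasPrivateKey   # True for both CSP and CNG generated keys
$cert.PrivateKey      # $null when the key was generated by a CNG provider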

In this post the PFX Repository website is created, which is accessed during VM deployment to download a PFX container belonging to the configuration ID. As a DSC configuration ID can be assigned to many LCM instances simultaneously, the client authentication certificates cannot be used for configuration document encryption purposes, as those certificates are unique to each instance.

PFX Website and functionality design

Every configuration document published to the DSC Pull Server will have an associated PFX container holding the public / private key pair used to encrypt / decrypt any potential sensitive data included in the document. If the configuration document currently contains no sensitive data, a PFX is issued nonetheless, as sensitive data could be added at a later stage.

The PFX Website will be available over HTTPS only and will require client certificate authentication to be accessed. The client authentication certificates assigned to the VMs during deployment will be the allowed certificates.

A unique pin code used for creating and opening a PFX file will be available via the website as well. In my opinion this is a better practice than using a pre-defined pin code for all PFX files. It is still not the ultimate solution, but I think the route taken is secure enough for now. If you have suggestions for improving this bit, please reach out!

The certificate containing the public key will be saved to a repository available to the configuration document generators. For now this will be a local directory.
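
Conceptually the issuing process looks like this sketch (the paths are examples, and New-SelfSignedCertificate stands in for the template-issued certificate, so mind the CNG caveat from the previous post):

$configId = [guid]::NewGuid().Guid
$cert = New-SelfSignedCertificate -DnsName $configId -CertStoreLocation Cert:\LocalMachine\My

# A unique pin per PFX instead of one shared pre-defined pin.
$pin = '{0:D6}' -f (Get-Random -Maximum 1000000)
$pwd = ConvertTo-SecureString -String $pin -AsPlainText -Force
Export-PfxCertificate -Cert $cert -FilePath "C:\PFXSite\$configId.pfx" -Password $pwd

# Public key copy for the configuration document generators.
Export-Certificate -Cert $cert -FilePath "C:\PublicKeys\$configId.cer"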

Prerequisites

The computer on which the PFX Website gets deployed can either be domain joined or a workgroup member. In my case I use the DSC Pull Server from the previous post, as I don't have a lot of resources.

Since the computer is joined to the same domain as the Enterprise CA, it already has the Enterprise CA certificate in its Trusted Root certificate store. If you choose not to deploy the PFX Website on a domain-joined computer, you need to import this certificate manually.

A Domain DNS zone should be created which reflects the external FQDN of the PFX Website.

In this (relatively short) post the DSC modules from the DSC Resource Kit will be added to the DSC Pull Server modules repository. This makes the DSC resources available for download by the LCM instances directly from the Pull Server. When an LCM pulls its configuration, it parses the configuration for module information. If the LCM finds it is missing modules or has incorrect versions of them, it will try to download them from the Pull Server. If it can't get them from the Pull Server, configuration will fail.
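
For reference, modules are staged on the Pull Server using the ModuleName_Version.zip naming convention with an accompanying checksum file. A minimal sketch (the module name and paths assume the defaults):

$name = 'xComputerManagement'
$src  = "$env:ProgramFiles\WindowsPowerShell\Modules\$name"
$dest = "$env:ProgramFiles\WindowsPowerShell\DscService\Modules"
$ver  = (Get-Module -ListAvailable -Name $name | Select-Object -First 1).Version.ToString()

Compress-Archive -Path "$src\*" -DestinationPath "$dest\${name}_$ver.zip"
New-DscChecksum "$dest\${name}_$ver.zip" -OutPath $dest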

Installing the Resource Kit

We will install the modules for the DSC Pull Server itself so they can be used for creating configuration documents later on. I created a little script to handle this process automatically. Let’s take a look:

PowerShell

#requires -version 5
param
(
    [Parameter(Mandatory=$true,
               HelpMessage='Enter a filepath to the Resource Kit Zip File.')]
    [ValidateScript({
        if ((Test-Path -Path $_) -and ($_.Split('.')[-1] -eq 'zip'))
        {
            $true
        }
        else
        {
            $false
        }
    })]
    [String]$Path,

    [Parameter(HelpMessage='When the Force switch is specified, modules will forcefully be overwritten')]
    [Switch]$Force
)

$ModuleDir = Join-Path -Path $env:ProgramFiles -ChildPath 'WindowsPowerShell\Modules'

# Expand the Resource Kit zip file into a temporary directory.
# Unblock the zip file first so all expanded content is unblocked with it.
$ExpandDir = New-Item -Path (Join-Path -Path $env:TEMP -ChildPath ([guid]::NewGuid().Guid)) -ItemType Directory
Unblock-File -Path $Path
Expand-Archive -Path $Path -DestinationPath $ExpandDir.FullName

foreach ($M in (Get-ChildItem -Path $ExpandDir.FullName -Directory))
{
    $DestinationPath = Join-Path -Path $ModuleDir -ChildPath $M.Name
    if (Test-Path -Path $DestinationPath)
    {
        # Module is already installed, compare versions.
        $CurrentVersion = (Test-ModuleManifest -Path (Join-Path -Path $DestinationPath -ChildPath "$($M.Name).psd1") -ErrorAction SilentlyContinue).Version
        $KitVersion     = (Test-ModuleManifest -Path (Join-Path -Path $M.FullName -ChildPath "$($M.Name).psd1") -ErrorAction SilentlyContinue).Version
        if ($CurrentVersion -gt $KitVersion)
        {
            Write-Verbose -Message "Module $($M.Name) on this system is newer than the Resource Kit version, remove it manually if you want to replace it"
            continue
        }
        if ($KitVersion -gt $CurrentVersion)
        {
            Write-Verbose -Message "Module $($M.Name) will be overwritten by a newer version"
        }
        elseif ($Force)
        {
            Write-Verbose -Message "Module $($M.Name) will be overwritten as specified by the Force switch"
        }
        else
        {
            Write-Verbose -Message "Module $($M.Name) is already installed with the same version, specify Force to overwrite it"
            continue
        }
        Remove-Item -Path $DestinationPath -Force -Recurse
        Copy-Item -Path $M.FullName -Destination $DestinationPath -Force -Recurse
    }
    else
    {
        Write-Verbose -Message "Module $($M.Name) will be added"
        Copy-Item -Path $M.FullName -Destination $DestinationPath -Force -Recurse
    }
}

# Clean up the temporary expand directory.
$ExpandDir | Remove-Item -Force -Recurse

So what does the script do?

It verifies that it's run in at least PowerShell v5. This is required as the Archive cmdlets are only available in v5.

It takes the path to the Resource Kit zip file as the Path parameter. The path is validated and, as an additional check, it is verified that the file has the .zip file extension. If the path does not exist or the file does not have the .zip extension, script execution is canceled.

It provides the invoker with a Force switch which will forcefully overwrite all modules with the Resource Kit content, unless the module already on the system is newer (handy if you made a manual module change (not a best practice by the way! Create a community copy instead) and want to revert to the original).

It will create a temporary directory in the Temp folder to expand the resource kit zip file to.

It will unblock the Zip file (unblocking all Zip content with it).

It will expand the resource kit zip file to the temporary directory.

It iterates through every module available in the resource kit and does the following:

It tests if the destination path already exists (which would mean the module is already installed).

If the module already exists, the versions of the module on the system and the module from the Resource Kit are compared.

If the version in the resource kit is newer, the module will be overwritten.

If the version in the resource kit is older, a verbose message is printed informing the invoker to manually remove the existing module if so desired (fail-safe).

If the Force switch was specified while invoking and the currently installed version is not newer, the module will be overwritten by the module from the resource kit.

If the module does not exist on the system yet, it is copied.

Run the script from the console.

When the script is done, you can validate that the resources are available by running Get-DscResource.
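
For example (WMF 5 syntax; the x* filter assumes you only want to see the Resource Kit modules):

Get-DscResource -Module x* | Format-Table Name, ModuleName, Version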

In this post the DSC Pull Server will be created and configured with Client Certificate Authentication. Let’s look at the design first.

DSC Pull Server Design

Again, as with the PKI solution, we are dealing with a chicken-and-egg situation. The company policy (explained in the introduction post) dictates that no ad-hoc DSC configurations are allowed; all DSC configurations may only be deployed via a DSC Pull Server. The DSC Pull Server will only be allowed to pass configuration documents (MOF files) if the LCM requesting such a file is trusted / authenticated. Also, the DSC Pull Server itself should be trusted by the LCM instances for client certificate authentication to be available.

The DSC Pull Server website will be configured with an HTTPS binding only, made available on the default HTTPS port (443) so it will be easy to expose on-premises as well as on the internet. Because multiple websites will eventually be hosted on this server, Server Name Indication (SNI) will be enabled (host headers for HTTPS). The website will be configured to require both SSL and client authentication certificates.
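
A sketch of what this means in IIS terms (the site and host name are examples):

Import-Module WebAdministration
New-WebBinding -Name 'PSDSCPullServer' -Protocol https -Port 443 -HostHeader 'pull.hyperv.nu' -SslFlags 1   # SslFlags 1 = SNI enabled
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location 'PSDSCPullServer' `
    -Filter 'system.webServer/security/access' -Name sslFlags -Value 'Ssl,SslNegotiateCert,SslRequireCert'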

The web application pool associated with the DSC Pull Server website requires anonymous authentication to be available; when it is disabled, the website simply does not function, so anonymous authentication will be configured.

Because the 'require client authentication certificates' setting on its own accepts client certificates issued by any of the trusted Certificate Authorities known to the web server, the IIS Client Certificate Mapping component of IIS will be installed as well to restrict this a bit more. A client certificate mapping will be configured for the DSC Pull Server website to map many certificates to one account. The allowed certificates may only be issued by the Enterprise CA, so an issuer rule will be configured to make sure this is the case. An additional deny mapping will be configured to deny all other implicitly 'trusted' client certificates (certificates chained to any of the server's trusted Certificate Authorities).
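
A sketch of the intended mapping (the site name, account, password and issuer are placeholders; the IIS Client Certificate Mapping Authentication feature, Web-Cert-Auth, must be installed first):

$cfg  = 'system.webServer/security/authentication/iisClientCertificateMappingAuthentication'
$site = 'PSDSCPullServer'

Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location $site -Filter $cfg -Name enabled -Value $true
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location $site -Filter $cfg -Name manyToOneCertificateMappingsEnabled -Value $true

# Many-to-one mapping: certificates matching the rule below map to one account.
Add-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location $site -Filter "$cfg/manyToOneMappings" -Name '.' `
    -Value @{ name = 'EnterpriseCAClients'; enabled = $true; permissionMode = 'Allow'; userName = 'CONTOSO\svc-dscpull'; password = 'ExamplePassw0rd' }

# Issuer rule restricting the mapping to certificates from the Enterprise CA.
Add-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Location $site -Filter "$cfg/manyToOneMappings/add[@name='EnterpriseCAClients']/rules" -Name '.' `
    -Value @{ certificateField = 'Issuer'; certificateSubField = 'CN'; matchCriteria = 'Contoso Enterprise Root CA' }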

Prerequisites

The computer on which the DSC Pull Server gets deployed can either be domain joined or a workgroup member. In my case I use another domain-joined machine for simplicity reasons.

Since the computer is joined to the same domain as the Enterprise CA, it already has the Enterprise CA certificate in its Trusted Root certificate store. If you choose not to deploy the DSC Pull Server on a domain-joined computer, you need to import this certificate manually.

A Domain DNS zone should be created which reflects the external FQDN of the DSC Pull Web Service (as with the PKI solution).

Because there is already a lot of content available about creating VM Roles, I won't go into depth in this series. I will show every step I took in creating the VM Role, but I won't elaborate on why I use certain property types, how this reflects in the user experience, etc.

Windows Image

As a prerequisite you need a sysprepped Windows Server 2012 R2 VHDX file in the SCVMM library. The image should have the latest WMF 5 preview (or RTM when it's released) installed. I follow Azure in deploying the WMF 5 preview because some of the DSC resources already require it.

The image should be configured with a family name, a version and some tags: WindowsServer2012, R2 and WMF5. Tags can only be assigned using PowerShell. Run the following command, replacing the values with your own.
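
Something along these lines should do it (the disk name and values are examples):

$vhdx = Get-SCVirtualHardDisk -Name 'WS2012R2_WMF5.vhdx'
Set-SCVirtualHardDisk -VirtualHardDisk $vhdx -FamilyName 'Windows Server 2012 R2 WMF5' `
    -Release '1.0.0.0' -Tag @('WindowsServer2012','R2','WMF5')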

In this post, a brand new PKI tailored to fit the needs of this blog series will be implemented. Let’s take a look at the “design” first.

PKI Design

The PKI solution described in this post will be implemented solely to support the VM Role DSC implementation as described in part 1. The design focuses on functionality first. PKI best practices and security guidelines are not part of the design goals (at least not explicitly).

The PKI will consist of the following components:

Enterprise Root Certificate Authority

CDP / AIA Website

Web Enrollment Policy Web Service (CEP)

Web Enrollment Web Service (CES)

To minimize the impact on any already existing Enterprise Root CA, a new, dedicated Enterprise Root CA will be deployed. This also makes sure the CA is implemented in such a way that it works for the scenario where the DSC Pull client is not (or not yet) a member of the Active Directory domain. A Standalone CA cannot be used, as Web Enrollment Services will be implemented, which requires an Enterprise CA.

Because the DSC Pull client does not have to be a member of the domain, the CDP and AIA locations added to issued certificates will target web locations only. This prevents unnecessary lookups against the Active Directory CDP and AIA containers, to which the client has no access at deployment time (or ever, in case it won't be joined to the domain). Plus, this setup makes sure the CA's CRL and AIA locations are publicly accessible as well. For more info about CDP and AIA see: http://technet.microsoft.com/en-us/library/cc776904(v=ws.10).aspx.
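
A sketch of what this looks like on the CA (the URLs are examples matching the zones described below):

Import-Module ADCSAdministration

# Publish the CRL locally, but stamp only the web URL into issued certificates.
Get-CACrlDistributionPoint | Remove-CACrlDistributionPoint -Force
Add-CACrlDistributionPoint -Uri 'C:\Windows\System32\CertSrv\CertEnroll\%3%8%9.crl' -PublishToServer -PublishDeltaToServer -Force
Add-CACrlDistributionPoint -Uri 'http://cdp.hyperv.nu/%3%8%9.crl' -AddToCertificateCdp -Force

Get-CAAuthorityInformationAccess | Remove-CAAuthorityInformationAccess -Force
Add-CAAuthorityInformationAccess -Uri 'http://cdp.hyperv.nu/%1_%3%4.crt' -AddToCertificateAia -Force

Restart-Service -Name certsvc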

The websites for CDP / AIA and Web Enrollment Services will be resolvable using public DNS information. For this scenario, external DNS is simulated by adding a public DNS name zone to the internal DNS. The web address for the Pull Server will eventually be resolvable via this method as well, as it too should be available internally and externally.

All endpoints required for the VM Role with DSC Pull Server integration scenario described in this series are web based. Because of this, the endpoints are easily made accessible when utilizing network virtualization, or when deploying DSC Pull clients to a public cloud service (e.g. Azure or an ISP) using mechanisms similar to the VM Role. Compare this with the Azure DSC VM Extension: it depends on internet connectivity, as it needs to get its configuration document and payload from Azure blob storage, which is external to the VM and is therefore acquired via a web call. The public accessibility of services (which I think will become more common as we venture further into distributed deployments over private, hybrid and public cloud environments) makes it ever more crucial for proper encryption and authentication mechanisms to be in place.

Web Enrollment Services will be used to provide access to Enterprise CA functionality (enrollment based on templates) without the need for the requestor (client) to talk directly to the CA / Active Directory or to be a member of the Active Directory domain. Instead, access to the Enrollment Services is handled over HTTPS, and the Enrollment Services act as a protocol proxy on behalf of the requestor. The Web Enrollment Services can be implemented in a variety of authentication configurations. Again, because the clients requesting a certificate do not necessarily belong to the Active Directory domain, authentication will be configured to require a username / password pair instead of Kerberos or client certificates. The Web Enrollment Services will use a certificate with the server authentication EKU to provide an HTTPS binding. This certificate will be issued by the Enterprise Root CA deployed in this scenario. For more information about Certificate Enrollment Web Services see: http://technet.microsoft.com/en-us/library/hh831822.aspx

Prerequisites

To start implementing the PKI, there should already be an Active Directory domain available as well as a domain-joined computer running Windows Server 2012 R2, which will be the platform for the CA installation. Two DNS zones should be created which reflect the external FQDNs (for this example I'll be using the hyperv.nu domain):

cdp.domain.tld (cdp.hyperv.nu)

webenroll.domain.tld (webenroll.hyperv.nu)

Both zones will have an A record with an empty hostname pointing to the IP address of the domain-joined computer on which the CA will be installed. The names can be anything you like. I create two separate zones so they will not conflict with the public DNS information. For example: as the zone is authoritative only for cdp.domain.tld, information about domain.tld will still be queried from the internet.
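
A sketch of these zones (the IP address is an example):

Add-DnsServerPrimaryZone -Name 'cdp.hyperv.nu' -ReplicationScope 'Forest'
Add-DnsServerResourceRecordA -ZoneName 'cdp.hyperv.nu' -Name '@' -IPv4Address '10.0.0.10'   # '@' = empty hostname

Add-DnsServerPrimaryZone -Name 'webenroll.hyperv.nu' -ReplicationScope 'Forest'
Add-DnsServerResourceRecordA -ZoneName 'webenroll.hyperv.nu' -Name '@' -IPv4Address '10.0.0.10'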

You could of course use public DNS instead, so you actually implement an end-to-end solution without the need for emulation.

Introduction

I started this blog series because I strongly believe in Desired State Configuration (DSC) and find the lack of native support / exposure in the vCurrent System Center suite a bit disappointing.

When System Center 2012 R2 was announced, and specifically SCVMM, there was an emphasis on DSC support being available. I never figured out what was actually meant by this statement, however; I just could not seem to find any native way of integrating with DSC.

Here at 44 minutes in, DSC integration is shown as part of the VM Role resource extension.

Not much integration if you ask me, because it's up to the scripting skills and creativity of whoever develops the Resource Extension, and not provided natively at the VMM level.

In this blog series I will look at how to target VM Roles at the DSC Pull Server. Why choose the Pull Server scenario, you ask? I figure the DSC Pull Server is a crucial part of an enterprise DSC adoption / implementation / strategy (controlled deployments instead of ad-hoc). Since I work mostly with enterprise customers, it makes sense to look at the Pull Server.

For this blog series, I imagine a company that has a DSC deployment policy in place which dictates a few rules:

No ad-hoc (Start-DscConfiguration) DSC configurations are allowed in production.

Making use of controlled releases for production systems via DSC Pull server is mandatory.

Configuration documents (MOF files) are not allowed to have unencrypted sensitive information.

Local Configuration Manager (LCM) instances are allowed to connect with the DSC Pull server only if they are trusted / authenticated.

Why have this imaginary policy? Because first, I think my customers want release management to be in place; minimizing risk for production environments always has high priority. Second, I know my customers want a decent level of security for their environment.

Customers are evolving their delivery methods to be agile. They want to adopt DevOps to better support their delivery methods, but at the same time these customers have trouble letting go of their well-established processes (ITIL). This is where the DSC Pull Server infrastructure comes into play. DSC configurations are developed in a DEV environment using either another Pull Server or ad-hoc configurations. When fully tested, these configurations are made available to production systems via the Pull Server (in other words: controlled releases).
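
To make these rules concrete, a pull-mode LCM meta-configuration enforcing such a policy could look roughly like this (WMF 4 style syntax; the URL, GUID and thumbprint are examples):

configuration LCMPullClient
{
    param ([String]$CertificateThumbprint)

    node localhost
    {
        LocalConfigurationManager
        {
            RefreshMode               = 'Pull'
            DownloadManagerName       = 'WebDownloadManager'
            DownloadManagerCustomData = @{ ServerUrl = 'https://pull.hyperv.nu/PSDSCPullServer.svc' }
            ConfigurationID           = '3ab0c2ba-5f8a-4a42-9f26-7e0e9e7f4d11'
            CertificateID             = $CertificateThumbprint   # decrypts credentials in the MOF
            ConfigurationMode         = 'ApplyAndAutoCorrect'
            RebootNodeIfNeeded        = $true
        }
    }
}
LCMPullClient -CertificateThumbprint '0123456789ABCDEF0123456789ABCDEF01234567' -OutputPath .\LCMPullClient
Set-DscLocalConfigurationManager -Path .\LCMPullClient -Verbose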