Posts in category Ben Gelens

Last Sunday I was going through the MVA course What’s New in Windows Server 2016 Preview. Eventually I ended up at the Windows Containers module and figured it would be nice if I had a Desired State Configuration resource to declaratively deploy containers with, including their initialization script. When I thought about this a bit more, I figured this could result in container automation similar to the dockerfile concept, but expressed in the PowerShell DSC configuration DSL. The big benefits: abstraction of lower-level configuration and source-controlled container recipes!

Since the Windows container tech is fresh off the boat, making its first appearance in Windows Server TP3, don’t expect anything in the Desired State Configuration area released by Microsoft yet. Luckily the Containers PowerShell module and the container specific extensions made to Invoke-Command and Enter-PSSession are working fine. They provide a solid basis to create a custom DSC resource on top of (read more about Container management using PowerShell here).

I started creating a DSC resource module and put in the bare minimum to make it functional. You can find, and contribute if you like, the project on my GitHub here. If you just want to use it, hit the download ZIP file button, unblock the ZIP and expand it to c:\Program Files\WindowsPowerShell\Modules.

A configuration based on this DSC resource can look like this (this one works perfectly):
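The actual schema lives in the GitHub project, but as a rough sketch (the resource name cWindowsContainer and its property names below are illustrative assumptions, not the module’s real schema), such a configuration could look like:

```powershell
# Sketch only: resource and property names are assumptions, check the
# GitHub project for the real schema of the container DSC resource.
configuration NginxContainer
{
    Import-DscResource -ModuleName cWindowsContainer

    cWindowsContainer nginx
    {
        Ensure             = 'Present'
        Name               = 'nginx01'
        ContainerImageName = 'WindowsServerCore'
        SwitchName         = 'Virtual Switch'
        StartUpScript      = 'C:\demo\InstallNginx.ps1'  # runs inside the container once it is up
    }
}

# generate the MOF file
NginxContainer -OutputPath C:\DSC
```

The StartUpScript idea mirrors the dockerfile RUN concept: the script is invoked in the container via the container-aware Invoke-Command once the container is running.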

Still on the to-do list:

Create an image from a deployed container and create new containers from that image

Configure host NAT rules

When I feel I have a solid enough module created, I’ll publish it to the PS Gallery.

Results:

What you see here is the configuration being loaded into memory. Then the MOF file is generated and applied. As a result, the container is running and configured.

Get-DscConfiguration shows the defined settings.

When we look in the container, we see the nginx process running. Now we need to create a firewall rule and a PAT rule on the host to allow traffic to pass to the container on a specific port. For now, I’ll leave this as a manual step until I’ve figured out a smart way to automate it.
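As a sketch of those manual steps (the NAT name, container IP address and port below are assumptions for this example, take them from your own container host):

```powershell
# open the chosen external port on the host firewall
New-NetFirewallRule -Name 'HTTP80' -DisplayName 'HTTP TCP/80' `
    -Protocol TCP -LocalPort 80 -Direction Inbound -Action Allow

# PAT rule: forward host port 80 to port 80 on the container's internal IP
# (replace ContainerNat / 172.16.0.2 with your NAT name and container IP)
Add-NetNatStaticMapping -NatName 'ContainerNat' -Protocol TCP `
    -ExternalIPAddress 0.0.0.0 -ExternalPort 80 `
    -InternalIPAddress 172.16.0.2 -InternalPort 80
```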

When a browser is pointed at the host IP address, it should now open the nginx test page.

You might wonder why I did not configure the container runtime with DSC. Turns out, the WinRM service cannot be started (known issue, see here), which makes it impossible for the Local Configuration Manager (LCM) to apply a configuration.

Introduction

The Azure Pack IaaS solution is awesome: we can provide our tenants with a lot of pre-baked solutions in the form of VM Roles.
Tenant users can deploy these solutions without needing to know how to build them themselves, which is a great value-add.

But not all customers want pre-baked solutions. Some customers want to bring their own configurations and solutions with them, and they don’t want your pre-baked stuff for multiple reasons (e.g. it doesn’t comply with their standards or requirements). In Azure these customers can make use of VM extensions, one of the missing pieces of tech in the Azure Pack / SCVMM IaaS solution. At this time it is very difficult to empower tenant users to bring their own stuff.

In Azure we have a lot of VM extensions available. Today I’m going to implement functionality in a VM Role which behaves similarly to the Azure DSC extension (as you probably know by now, I like DSC a lot).

Please note! The implementation will serve as a working example on how you could do this. If you have any questions, please ask them but I will not support the VM Role itself.

Scenario:

A tenant user wants to deploy configurations to their VMs themselves. As the configuration mechanism, the tenant user has chosen Desired State Configuration (DSC). If at all possible, they want the same approach on your Azure Pack service as they have in Azure.

In Azure you can zip your PowerShell script containing your DSC configuration together with the DSC resources it requires. This archive is then uploaded to your Azure Blob storage. The VM DSC extension picks up this archive, unpacks it and runs the configuration in the script to generate the MOF file. During this procedure the extension takes configuration data and user-provided arguments into account.

In our Azure Pack VM Role implementation we will try to mimic all this by letting the tenant user zip up their configuration script together with the DSC resources, in the same way as they are used to with the Azure DSC extension. In fact, we will use the Azure PowerShell module to do this. Then, because we don’t have blob storage in our implementation (yet), we assume the tenant user has a web server in place where they will make this zip file available. At this same location a PSD1 file could be published containing DSC configuration data. Also, the VM Role will take arguments as well.
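Packaging the archive with the Azure PowerShell module could look like this (paths and script name are examples):

```powershell
# Creates ADDomain.ps1.zip containing the configuration script plus the
# DSC resource modules referenced by Import-DscResource in that script
Publish-AzureVMDscConfiguration -ConfigurationPath 'C:\DSC\ADDomain.ps1' `
    -ConfigurationArchivePath 'C:\DSC\ADDomain.ps1.zip'
```

The resulting ZIP is exactly what the Azure DSC extension would consume, so the tenant user’s workflow stays identical up to the upload step.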

Prepare a configuration

First let’s create a configuration archive (ZIP file) and a configuration data file (PSD1 file). Then we will stage everything on a web server.

configuration ADDomain
{
    param
    (
        [Parameter(Mandatory=$true)]
        [String]$DomainName,

        [Parameter(Mandatory=$true)]
        [String]$SafemodeAdminPassword
    )

    Import-DscResource -ModuleName xActiveDirectory

    # the password string is converted into a PowerShell credential object
    $SafemodeAdminCred = New-Object -TypeName PSCredential -ArgumentList 'Administrator',
        (ConvertTo-SecureString -String $SafemodeAdminPassword -AsPlainText -Force)

    node localhost
    {
        WindowsFeature ADDS
        {
            Ensure = 'Present'
            Name   = 'AD-Domain-Services'
        }

        xADDomain FirstDC
        {
            DomainName = $DomainName
            SafemodeAdministratorPassword = $SafemodeAdminCred
            DomainAdministratorCredential = $SafemodeAdminCred # used to check if domain already exists. Domain Administrator will have password of local administrator
            DependsOn = '[WindowsFeature]ADDS'
        }
    }
}

The configuration will produce a MOF file which will make sure that the AD-Domain-Services role is installed and then promote the VM to be the first domain controller of a new domain. The domain name and safe mode password are defined as parameters and thus are user-configurable at MOF file generation time. The password is taken as a string and then converted to a PowerShell credential object. The configuration needs the xActiveDirectory DSC module and therefore this module must be packaged up with the script.
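A matching configuration data file (PSD1) could look like the sketch below; the file name and the PSDscAllowPlainTextPassword setting are examples for lab use, in production you would encrypt credentials with a certificate instead:

```powershell
# ADDomain.psd1 - example DSC configuration data published next to the ZIP
@{
    AllNodes = @(
        @{
            NodeName                    = 'localhost'
            # Lab use only: prefer certificate-based credential encryption
            PSDscAllowPlainTextPassword = $true
        }
    )
}
```

Dot-sourcing the configuration script and calling `ADDomain -DomainName 'contoso.local' -SafemodeAdminPassword 'P@ssw0rd' -ConfigurationData .\ADDomain.psd1 -OutputPath C:\DSC` (names here follow the example configuration, not anything fixed by the VM Role) would then produce the MOF locally for testing.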

So here we are at post number 10, a good number to finalize this series with some closing notes.

I must say, creating this series has taken a lot of energy and time, but the reactions I have received thus far have made it more than worth it. Thank you community for the great feedback and thank you for reading! I learned a lot writing the series, I hope you did while reading it.

DSC Resources

Microsoft has made a lot of effort in producing and releasing DSC resources to the public. These resources are great, but you must know that the x prefix stands for experimental. This means you don’t get any support, and some of them may not be entirely stable or may not work the way you think they should.

I took xComputer as an example in this blog series. A great resource to domain join your computer (and do a lot more), but not suitable in a Pull Server scenario where the computer name is not known up front.

Is this bad? No, I don’t think so. The xComputer resource was probably built with a certain scenario in mind, and in its intended scenario it works just great. If a resource does not work for you, you could still take a look at it and build your own ‘fork’ or you could start from scratch. The modules / resources are written using PowerShell, so if you can read and write PowerShell, you’re covered. Just be creative and you will manage!

Pull Server

Ad-hoc configurations are more dynamic than Pull Server delivered configurations. When using ad-hoc you are able to dynamically populate the configuration document content by adding in configuration data which is gathered / generated on the fly. Even the configuration block itself can contain dynamic elements. The resulting MOF file (configuration document) is created on and tailored for its destination. The downside of this approach is that configurations are done on the fly, which can turn into an ‘oops’ situation more quickly.

Pull Server configurations are more difficult to set up because configuration documents are static and created up front. If you create a single configuration for multiple instances (e.g. a web server farm), the configuration should be node name agnostic. The gain here is that configurations are delivered in a more controlled fashion, including the required modules. When a configuration change is made, the change can be pulled and implemented automatically.

Beware of using over-privileged accounts

Beware of over-privileged credentials used in configuration documents. Although you have taken all necessary precautions by encrypting sensitive data using certificates, if the box gets owned, the certificate private key is exposed and therefore the credentials fall into the wrong hands.

For example: Credentials which interact with AD to domain join should be able to do just that. In a VM Role scenario I would build an SMA runbook to pre-create a computer account as soon as the VM Role gets deployed. A low privileged domain account is then delegated control over the object so it is able to domain join. DSC in this case does not have to create a computer account but can just establish the trust relationship.
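A minimal sketch of such a runbook step, assuming the ActiveDirectory module is available on the runbook worker and using example OU and account names:

```powershell
Import-Module -Name ActiveDirectory

# $VMName would come from the runbook's input parameters
$VMName = 'TENANTVM01'

# pre-create the computer account in a tenant OU (disabled until the join)
New-ADComputer -Name $VMName -Path 'OU=Tenants,DC=contoso,DC=local' -Enabled $false

# delegate full control over just this one object to a low-privileged join
# account, so it can complete the domain join but nothing else
$Computer = Get-ADComputer -Identity $VMName
dsacls.exe $Computer.DistinguishedName /G 'CONTOSO\svc-domainjoin:GA'
```

The DSC configuration then only needs the low-privileged svc-domainjoin credential to establish the trust relationship.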

VM Role challenges

When dealing with the VM Role and external automation / orchestration, some challenges arise.

There is no way (or at least no easy way) of coordinating between the VM Role resource extension and DSC configuration state. DSC could potentially reboot the system and go on with configuring after the reboot. It then reboots again and again and again, depending on the configuration. The resource extension allows for a script or application to restart the system, but treats the restart as the end of a task. As you don’t know the configuration’s reboot requirements up front, managing this in the resource extension becomes a pain, so you will probably not do it. As a result, the VM Role is provisioned successfully for the tenant user but really is still undergoing configuration.

So VMs will have a provisioned state and become accessible for the tenant user while the VM is still undergoing its DSC / SMA configuration. A user can potentially log in, shut down, restart, intervene and thereby disrupt the automation tasks. In the case of DSC this is not a big problem, as the consistency engine will just keep on going until consistency is reached, but if you use SMA for example, well, it becomes a bit difficult.

Another scenario: the user logs in and finds that the configuration he expected is not implemented. Because the user does not know DSC is used, the VM Role is thrown away and the user tries again and again until eventually he is fed up with the service received and starts complaining.

A workaround I use today at some customers is to temporarily assign the VM Role to another VMM user when the VMM deployment job is finished. This removes the VM Role from the tenant user’s subscription and thereby from their control. The downside here is obvious: the tenant user just experienced a glitch where the VM Role disappeared, and tries to deploy it again. Because the initial name chosen for the Cloud Service is now assigned to another user and subscription, the name is available again, so there is a potential for naming conflicts when assigning the finished VM Role back to the original owner.

What’s next?

First I will do a speaker session at the SCUG/Hyper-V.nu event about VM Roles with Desired State Configuration. And no, it will not be a walkthrough of this blog series, so there is still much to be done generating the content for this presentation. I think I will blog about the content of my session once I have done it.

Then, in the near future, I will start a new series built upon what we learned in this one. I have many ideas about what could be done, but I still have to think about scope and direction for a bit. This series took up a lot more time than I anticipated and I changed scope many times because I wanted to do just too much. As a spoiler for the next series: I know it will involve SMA :-) Stay tuned at www.hyper-v.nu!

In a previous post I talked about why I did not include a domain join in my example DSC configuration:

So why not incorporate a domain join in this configuration? There is a resource available in the Resource Kit which can handle this right?
Yes, there is a resource for this and a domain join would be the most practical example I would come up with as well. But….

The xComputer DSC resource contained in the xComputerManagement module has a mandatory parameter for the ComputerName. As I don’t know the ComputerName up front (the ComputerName is assigned by VMM based on the name range provided in the resource definition), I cannot generate a configuration file up front. I could deploy a VM Role with just 1 instance containing a ComputerName which was pre-defined and used in a configuration document but this scenario is very inflexible and undesirable. In a later post in this series I will show you how to create a DSC Resource yourself to handle the domain join without the ComputerName to be known up front.

In this blog post we will author a DSC resource which handles domain joining without the need to know the ComputerName up front which makes it a suitable resource for the Pull Server scenario described in this series.

WMF 5 Preview release

When I started writing this blog series, Windows Management Foundation 5 (WMF 5) November Preview was the latest WMF 5 installment available for Windows Server 2012 R2.

Then WMF 5 February Preview came along and broke my resource by changing the parameter attributes (e.g. [DscResourceKey()] became [DscProperty(Key)] and [DscResourceMandatory()] became [DscProperty(Mandatory)]). I fixed the resource for the February Preview release (download WMF 5 Feb preview here: http://www.microsoft.com/en-us/download/details.aspx?id=45883).

The DSC resource Class definition is now declared a “stable design”. Because of this I don’t expect many changes anymore, and if a change were made, repairing the resource should be relatively easy.

I tested my resource locally (by adding it to the module directory directly) and it worked great. I thought I had done it: a Pull Server scenario friendly resource to handle the domain join, without the need to provide the computer name up front, using the latest and greatest Class based syntax.

So I prepped the resource to be deployed for the first time via the Pull server and I was hit by a problem. I expected for modules with Class defined resources to just work when delivered via the Pull server. The Pull server itself actually has no problems with them at all but the Web Download Manager component of the LCM is apparently hard wired to check for a valid “script module” structure (at the time of writing using the February Preview).

As a workaround, you could add the Class defined module to the “C:\Program Files\WindowsPowerShell\Modules” directory of your VM Role images directly. This will result in the download of the module to be skipped as it is already present (but you actually don’t want to do this because it is undesirable to maintain DSC resources in an OS image).

To make this post a bit more future proof, I will show you how to author both the Class based module and the script based module. Although you can only use the script based module today, the Class based module should be usable in the near future as well.

In this post the VM Role Resource Definition and Resource Extension that was built in an earlier post will be updated with the additional required steps (3+). Then the VM Role gets deployed and we will look at the result to validate everything works as expected.

Extend the Resource Extension

First off we extend the resource extension with some additional steps. We then copy the current resource extension and give it another name so the previous state is safeguarded (if you did not create the VM Role resource definition and extension packages in part 3, you can download them here if you want to follow along: http://1drv.ms/1urL9AM).

Next open the copied resource extension in the VM Role Authoring Tool and increase the version number to 2.0.0.0.

In this post the certificate files used for the configuration document encryption are created. Also an example configuration will be created which will have encrypted sensitive data.

Issue with CNG generated keys

While testing out which certificate template settings would do the job intended for this blog post, I stumbled upon an interesting finding (bug?). Apparently the LCM uses .NET methods for accessing certificate keys. When the certificate keys are generated using the Certificate Next Generation API (see: https://technet.microsoft.com/en-us/library/cc730763(v=ws.10).aspx), the private key is not accessible to the LCM. It is also not visible when using the PowerShell Cert PS drive.
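You can check for this condition from PowerShell: a certificate whose key was generated by a CNG provider still reports having a private key, yet the .NET PrivateKey property comes back empty (the thumbprint below is a placeholder):

```powershell
# placeholder thumbprint: substitute one from your own store
$Cert = Get-Item -Path 'Cert:\LocalMachine\My\0123456789ABCDEF0123456789ABCDEF01234567'

$Cert.HasPrivateKey   # True: a key pair exists for this certificate
$Cert.PrivateKey      # empty for a CNG-generated key, populated for a legacy CSP key
```

If PrivateKey is empty while HasPrivateKey is True, the certificate template most likely used a CNG key storage provider, and the LCM will not be able to decrypt with it.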

In this post the PFX Repository website is created which is accessed during VM deployment to download a PFX container belonging to the configuration ID. As a DSC configuration ID can be assigned to many LCM instances simultaneously, the client authentication certificate cannot be used for configuration document encryption purposes as these certificates are unique for each instance.

PFX Website and functionality design

Every configuration document published to the DSC Pull Server will have an associated PFX container containing both the public and private key pairs used to encrypt / decrypt any potential sensitive data included in the document. If the configuration document currently does not have sensitive data, a PFX is issued nonetheless as sensitive data could be added in a later stage.

The PFX Website will be available over HTTPS only and will require client certificate authentication to be accessed. The client authentication certificates assigned to the VMs during deployment will be the allowed certificates.

A unique PIN code used for creating and opening a PFX file will be available via the website as well. In my opinion this is a better practice than using a pre-defined PIN code for all PFX files. It is still not the ultimate solution, but I think the route taken is secure enough for now. If you have suggestions for improving this bit, please reach out!

The certificate containing the public key will be saved to a repository available to the configuration document generators. For now this will be a local directory.

Prerequisites

The Computer on which the PFX Website gets deployed can either be domain joined or be a workgroup member. In my case I use the DSC Pull Server from the previous post as I don’t have a lot of resources.

Since the Computer is joined to the same domain as the Enterprise CA, the computer already has the Enterprise CA certificate in its Trusted Root certificate store. If you choose not to deploy the PFX Website on a domain joined computer, you need to import this certificate manually.

A Domain DNS zone should be created which reflects the external FQDN of the PFX Website.

In this (relatively short) post the DSC modules from the DSC Resource Kit will be added to the DSC Pull Server modules repository. This makes the DSC resources available for download by the LCM instances directly from the Pull Server. When an LCM pulls its configuration, it parses the configuration for module information. If the LCM finds it is missing modules, or has incorrect versions of them, it will try to download them from the Pull Server. If it can’t get them from the Pull Server, configuration will fail.

Installing the Resource Kit

We will install the modules for the DSC Pull Server itself so they can be used for creating configuration documents later on. I created a little script to handle this process automatically. Let’s take a look:


#requires -version 5
[CmdletBinding()]
param
(
    [Parameter(Mandatory=$true,
               HelpMessage='Enter a filepath to the Resource Kit Zip File.')]
    [ValidateScript({
        if ((Test-Path -Path $_) -and ($_.Split('.')[-1] -eq 'zip'))
        {
            $true
        }
        else
        {
            $false
        }
    })]
    [String]$Path,

    [Parameter(HelpMessage='When the Force switch is specified, modules will forcefully be overwritten')]
    [Switch]$Force
)

$ModulePath = Join-Path -Path $env:ProgramFiles -ChildPath 'WindowsPowerShell\Modules'

# expand the resource kit zip file to a temporary directory
$ExpandDir = New-Item -ItemType Directory -Path (Join-Path -Path $env:TEMP -ChildPath ([guid]::NewGuid().Guid))
Unblock-File -Path $Path
Expand-Archive -Path $Path -DestinationPath $ExpandDir.FullName

# a directory containing a manifest named after itself is treated as a module root
foreach ($M in (Get-ChildItem -Path $ExpandDir.FullName -Directory -Recurse |
                Where-Object -FilterScript {Test-Path -Path (Join-Path -Path $_.FullName -ChildPath "$($_.Name).psd1")}))
{
    $DestinationPath = Join-Path -Path $ModulePath -ChildPath $M.Name
    if (Test-Path -Path $DestinationPath)
    {
        # module already installed: compare the installed and resource kit versions
        $CurrentVersion = (Get-Module -ListAvailable -Name $M.Name |
                           Sort-Object -Property Version -Descending |
                           Select-Object -First 1).Version
        $NewVersion = (Test-ModuleManifest -Path (Join-Path -Path $M.FullName -ChildPath "$($M.Name).psd1")).Version

        if ($CurrentVersion -gt $NewVersion)
        {
            Write-Verbose -Message "Module $($M.Name) on this system is newer than the resource kit version, remove it manually first if you want to downgrade"
        }
        elseif (($NewVersion -gt $CurrentVersion) -or $Force)
        {
            if ($Force)
            {
                Write-Verbose -Message "Module $($M.Name) will be overwritten as specified by the Force switch"
            }
            Remove-Item -Path $DestinationPath -Force -Recurse
            Copy-Item -Path $M.FullName -Destination $DestinationPath -Force -Recurse
        }
    }
    else
    {
        Write-Verbose -Message "Module $($M.Name) will be added"
        Copy-Item -Path $M.FullName -Destination $DestinationPath -Force -Recurse
    }
}
$ExpandDir | Remove-Item -Force -Recurse

So what does the script do?

It verifies that it’s run in at least PowerShell v5. This is required as the Archive cmdlets are only available in v5.

It takes the path to the resource kit zip file as the Path parameter. The path will be validated and as an additional check, it is verified if the file has the .zip file extension. If the path does not exist or the file does not have the zip file extension, script execution is canceled.

It provides the invoker with a Force switch which will forcefully overwrite all modules with the resource kit content, unless the module already on the system is newer. This is handy if you made a manual module change and want to revert to the original (modifying modules in place is not best practice by the way! Create a community copy instead).

It will create a temporary directory in the Temp folder to expand the resource kit zip file to.

It will unblock the Zip file (unblocking all Zip content with it).

It will expand the resource kit zip file to the temporary directory.

It iterates through every module available in the resource kit and does the following:

It tests if the destination path already exists (which would mean the module is already installed).

If the module already exists, the module on the system and the module from the resource kit are checked for their version.

If the version in the resource kit is newer, the module will be overwritten.

If the version in the resource kit is older, a verbose message is printed informing the invoker to manually remove the existing module first if a downgrade is desired (fail-safe).

If the Force switch was specified while invoking and the currently installed version is not newer, the module will be overwritten by the module from the resource kit.

If the module does not exist on the system yet, it is copied.

Run the script from the console.

When the script is done, you can validate the resources being available by running Get-DSCResource.
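To also make these modules downloadable by LCM instances, each module must additionally be packaged in the pull server’s module path as ModuleName_Version.zip with an accompanying checksum file. A sketch (the default DSC service path and WMF 5 parameter names for New-DscChecksum are assumed):

```powershell
# default module repository path of the DSC pull server web service
$PullServerModulePath = "$env:ProgramFiles\WindowsPowerShell\DscService\Modules"

foreach ($Module in (Get-Module -ListAvailable -Name x*))
{
    # pull server convention: <ModuleName>_<Version>.zip plus a .checksum file
    $ZipPath = Join-Path -Path $PullServerModulePath -ChildPath ('{0}_{1}.zip' -f $Module.Name, $Module.Version)
    Compress-Archive -Path "$($Module.ModuleBase)\*" -DestinationPath $ZipPath -Force
    New-DscChecksum -Path $ZipPath -Force
}
```

The module files must sit at the root of each archive, which is why the contents of ModuleBase are zipped rather than the module directory itself.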

In this post the DSC Pull Server will be created and configured with Client Certificate Authentication. Let’s look at the design first.

DSC Pull Server Design

Again as with the PKI solution, we are dealing with a chicken and the egg situation. The company policy (explained in the introduction post) dictates no ad-hoc DSC configurations are allowed. All DSC configurations are only allowed to be deployed via a DSC Pull Server. The DSC Pull Server will only be allowed to pass configuration documents (MOF files) if the LCM requesting such a file is trusted / authenticated. Also the DSC Pull Server itself should be trusted by the LCM instances for client certificate authentication to be available.

The DSC Pull Server website will be configured with an HTTPS binding only and it will be made available on the default HTTPS port (443) so it will be easy to make it available on-premises as well as on the internet. Because multiple websites will eventually be hosted on this server, Server Name Indication (SNI) will be enabled (host headers for HTTPS). The Web site will be configured to require both SSL and client authentication certificates.

The Web application pool associated with the DSC Pull Server website requires anonymous authentication to be available. When this is disabled, the website will not function, so anonymous authentication will be configured.

Because the ‘require client authentication certificates’ setting on its own accepts client certificates issued by any Certificate Authority the web server trusts, the IIS Client Certificate Mapping component of IIS will be installed as well to restrict access further. A client certificate mapping will be configured for the DSC Pull Server website to map many certificates to one account. Because the allowed certificates may only come from the Enterprise CA, an issuer rule will be configured to enforce this. An additional deny mapping will be configured to deny all other implicitly ‘trusted’ client certificates (certificates chained to any of the server’s trusted Certificate Authorities).
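What such a many-to-one mapping amounts to can be sketched with the WebAdministration module; the site name, account name and password below are placeholders, and the issuer rule itself (matching the certificate’s Issuer field) would be added as a child rule of this mapping:

```powershell
Import-Module -Name WebAdministration

$Filter = 'system.webServer/security/authentication/iisClientCertificateMappingAuthentication'

# enable IIS client certificate mapping authentication for the pull server site
Set-WebConfigurationProperty -PSPath 'IIS:\' -Location 'PSDSCPullServer' `
    -Filter $Filter -Name enabled -Value $true

# many-to-one mapping: certificates passing the (separately added) issuer rule
# all map to a single low-privileged account
Add-WebConfigurationProperty -PSPath 'IIS:\' -Location 'PSDSCPullServer' `
    -Filter "$Filter/manyToOneMappings" -Name '.' -Value @{
        name           = 'EnterpriseCAClients'
        enabled        = $true
        permissionMode = 'Allow'
        userName       = 'CONTOSO\svc-dscpull'
        password       = 'P@ssw0rd'
    }
```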

Prerequisites

The Computer on which the DSC Pull Server gets deployed can either be domain joined or be a workgroup member. In my case I use another domain joined machine for simplicity reasons.

Since the Computer is joined to the same domain as the Enterprise CA, the computer already has the Enterprise CA certificate in its Trusted Root certificate store. If you choose not to deploy the DSC Pull Server on a domain joined computer, you need to import this certificate manually.

A Domain DNS zone should be created which reflects the external FQDN of the DSC Pull Web Service (as with the PKI solution).