Thursday, 22 December 2016

Out of the box, it's not possible to manage a non-domain joined Hyper-V 2012 R2 server from Windows 10. The following steps can be taken to enable WinRM and allow management.

Check the scripts before running them as they will install Hyper-V and the Management tools. If you've already done this you can remove those lines.

Bear in mind this uses NTLM encryption (Negotiate authentication) over HTTP. You will need to check whether this is secure enough for your environment. Here is some further reading.

Windows Editions Used

Server: Windows Server 2012 R2 (I used the RTM disk, fresh installed, no updates)
Client: Windows 10 Pro (I used Insider Preview 14965, fresh installed, no updates)

Server Configuration Script

Client Configuration Script

Manual Client Configuration

This is an additional step to the client configuration script and must be completed for this to work:

Click Start > Run > type 'dcomcnfg' > OK

Browse to 'Component Services' > 'Computers'

Right Click 'My Computer' and Click 'Properties'

Click the 'COM Security' Tab, then click the 'Edit Limits' button in the 'Access Permissions' section.
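Alongside the DCOM changes above, the client-side WinRM pieces can be sketched as follows. This is a hedged example, not the post's actual script: the server name 'HV01' and the use of cmdkey to store the server's local admin credential are assumptions for illustration.

```powershell
# Sketch of client-side setup for managing a non-domain Hyper-V host.
# 'HV01' is a placeholder server name. Adding a host to TrustedHosts
# permits NTLM (Negotiate) authentication to it, so only add servers
# you trust.
Start-Service WinRM
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'HV01' -Concatenate -Force

# Store the server's local administrator credential for the connection
cmdkey /add:HV01 /user:HV01\Administrator /pass
```

Run these in an elevated PowerShell console on the Windows 10 client.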

Tuesday, 13 December 2016

Part 1 - Pushing a configuration and Credentials
Part 2 - Pull Server Setup
Part 3 - Custom Scripts
Part 4 - Partial Configurations

Partial Configurations

Sometimes, having multiple configurations for a single server can be beneficial. For example, all servers built may follow a single base configuration, with another configuration to install and set up an application. Another example could be that two or more teams are responsible for the ultimate setup of a server, with each team managing some portion of the configuration. Both teams can run a pull server with their own configurations.

To handle this, DSC now supports partial configurations. Partial configurations can be set up using push, pull or some combination of both. In this example, the LCM on the client is configured to pull 2 configurations from the pull server using the ConfigurationID method. ConfigurationID should not be an easily guessable value, so New-Guid is a good way to generate it.

The following is a basic test configuration for the machine using the 2 partial configuration names set up in the LCM configuration. It uses the same $AllNodes configuration data as the above LCM configuration.
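An LCM meta-configuration of the kind described, pulling two partial configurations by ConfigurationID, might look like the following sketch. The node name, URL, thumbprint and the partial configuration names 'Base' and 'App' are all placeholders, not the post's actual gist.

```powershell
# Hedged sketch: LCM configured to pull two partial configurations.
[DSCLocalConfigurationManager()]
configuration PartialPullLCM
{
    Node 'dscclient.example.com'
    {
        Settings
        {
            RefreshMode     = 'Pull'
            # Generate once with New-Guid and keep it with the client record
            ConfigurationID = '11111111-2222-3333-4444-555555555555'
        }

        ConfigurationRepositoryWeb PullServer
        {
            ServerURL     = 'https://dscpull.example.com:8080/PSDSCPullServer.svc'
            CertificateID = '<pull server certificate thumbprint>'
        }

        # Each partial configuration names the repository it comes from
        PartialConfiguration Base
        {
            ConfigurationSource = @('[ConfigurationRepositoryWeb]PullServer')
        }

        PartialConfiguration App
        {
            ConfigurationSource = @('[ConfigurationRepositoryWeb]PullServer')
            DependsOn           = '[PartialConfiguration]Base'
        }
    }
}

PartialPullLCM -OutputPath C:\DSC\PartialPullLCM
```

Apply the resulting meta.mof with Set-DscLocalConfigurationManager as usual.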

In the above configuration, using the same $AllNodes configuration data, the mof files are published to the correct location on the pull server. It is also possible to configure this using the Configuration Names method. Further reading on this topic can be found on MSDN here.

There are 3 sections which need to be completed for this to work.

GetScript

This is required to return a hashtable with a single 'Result' key. The value doesn't need to be anything specific, as nothing is done with the output. I normally return something to do with the resource, or the following: @{ Result = "Not Implemented" }

TestScript

This is used to test whether the work that the SetScript section implements is already done. For instance, if you were creating a certificate, you could use Get-Certificate in here and return $true or $false depending on whether the certificate exists. This section is required and should output $false if the SetScript section needs to be run and $true if everything is in order.

SetScript

This section does the actual work and is not required to return anything.

Below is an example script that I made that will ensure a specific ODBC connection exists. This should really be expanded so that the TestScript section checks each part of the ODBC connection to make sure the server, database and name all match the correct information. For the purpose of this example, though, it should demonstrate a working script resource.

Variables are not processed at execution time; they are expanded when the .mof file is created. The special $Using:varname syntax is required to expand them at creation time. Throughout the script, I use Write-Verbose so that the text is hidden during normal runs of the configuration.
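The three sections and the $Using: syntax can be sketched as follows. This is a minimal illustrative skeleton, not the post's ODBC example: the marker-file check is a placeholder stand-in for real Test/Set logic.

```powershell
# Hedged sketch of a Script resource showing GetScript/TestScript/SetScript
# and the $Using: syntax. The marker file is an illustrative placeholder.
$MarkerPath = 'C:\Temp\marker.txt'

configuration ScriptExample
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        Script EnsureMarkerFile
        {
            # Nothing is done with this output; a 'Result' key is all that's needed
            GetScript  = { @{ Result = 'Not Implemented' } }

            # $true skips SetScript; $false causes it to run
            TestScript = {
                Write-Verbose "Checking for $Using:MarkerPath"
                Test-Path -Path $Using:MarkerPath
            }

            # The actual work
            SetScript  = {
                Write-Verbose "Creating $Using:MarkerPath"
                New-Item -Path $Using:MarkerPath -ItemType File -Force
            }
        }
    }
}
```

Note that $MarkerPath is expanded into the mof at compile time via $Using:, exactly as described above.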

Monday, 5 December 2016

Part 1 - Pushing a configuration and Credentials
Part 2 - Pull Server Setup
Part 3 - Custom Scripts
Part 4 - Partial Configurations

Pull Server Configs - SMB / HTTP / HTTPS

A DSC pull server can be configured using SMB by creating a file share that allows the machine account of the clients to read the share. Simply dropping configurations in the correct format and configuring the LCM will allow clients to pull a configuration from the server. The share should be secured to only allow trusted admins to read and write data.

The alternative methods are HTTP and HTTPS, which are identical except for TLS. HTTP configuration is not recommended, since all communications are done in the clear and configuration data is sensitive. HTTP also opens the possibility of man-in-the-middle attacks.

Configure HTTPS pull server

An HTTPS pull server can be bootstrapped using a DSC configuration with the xWebService DSC module. The full MSDN guide can be found here. The following .ps1 file will create and configure a web server as a DSC pull server on the machine it's run on. This will be using a self-signed certificate, so should not be used in production. Once this has completed, it should write the thumbprint for the new self-signed certificate to the screen. This is required to configure the client.
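A pull server configuration along these lines can be sketched with the xDscWebService resource from the xPSDesiredStateConfiguration module. The paths shown are the module's conventional defaults and the thumbprint is a placeholder; this is not the post's actual gist.

```powershell
# Hedged sketch of an HTTPS DSC pull server using xDscWebService.
configuration PullServer
{
    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    Node 'localhost'
    {
        WindowsFeature DSCService
        {
            Name   = 'DSC-Service'
            Ensure = 'Present'
        }

        xDscWebService PSDSCPullServer
        {
            Ensure                = 'Present'
            EndpointName          = 'PSDSCPullServer'
            Port                  = 8080
            PhysicalPath          = "$env:SystemDrive\inetpub\PSDSCPullServer"
            CertificateThumbPrint = '<self-signed certificate thumbprint>'
            ModulePath            = "$env:ProgramFiles\WindowsPowerShell\DscService\Modules"
            ConfigurationPath     = "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration"
            State                 = 'Started'
            DependsOn             = '[WindowsFeature]DSCService'
        }
    }
}

PullServer -OutputPath C:\DSC\PullServer
```

Compile and push this with Start-DscConfiguration on the server itself.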

Configure HTTPS Pull Client

To configure the pull client, the LCM on the machine needs to be set up to collect its configuration from the new pull server. To do this, run the following DSC configuration, then apply it with the Set-DscLocalConfigurationManager command. Make sure to note the certificate thumbprint from the pull server creation script and add it to the client configuration below.
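A pull client LCM configuration of the sort described might look like this sketch; the GUID and thumbprint are placeholders for the values noted from the pull server script.

```powershell
# Hedged sketch of the pull client LCM configuration.
[DSCLocalConfigurationManager()]
configuration PullClientLCM
{
    Node 'dscclient.example.com'
    {
        Settings
        {
            RefreshMode     = 'Pull'
            # The GUID generated for this client (e.g. via New-Guid)
            ConfigurationID = '11111111-2222-3333-4444-555555555555'
        }

        ConfigurationRepositoryWeb PullServer
        {
            ServerURL     = 'https://dscpull.example.com:8080/PSDSCPullServer.svc'
            # Thumbprint written to screen by the pull server creation script
            CertificateID = '<pull server certificate thumbprint>'
        }
    }
}

PullClientLCM -OutputPath C:\DSC\PullClientLCM
Set-DscLocalConfigurationManager -ComputerName 'dscclient.example.com' -Path C:\DSC\PullClientLCM -Verbose
```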

Create and deploy a configuration on the Pull Server

Once the client is configured, a configuration for it can be created and added to the pull server. There are 2 methods for adding configurations to a pull server: the configuration .mof files can be named the same as the GUID configured in the LCM, or the configurations can be given friendly names and referenced in the LCM. The friendly names require registration of the pull client with the server. Further information can be found here. Configurations held on a pull server also require a checksum to be created for them. This is shown a bit further down.

The above LCM configuration uses the GUID method. When it is run, it should write to the screen the GUID for the client as configured. Be careful, as the script will create a new GUID if run again. To set up a basic configuration to be pulled to the client, create and run the following configuration:

Once the mof file is created, run the following script on the pull server to move the mof file to the correct location, name it appropriately and create the required checksum file.
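The publishing step can be sketched as below. The GUID and source path are placeholders, and the destination is the conventional pull server configuration path; this is an illustrative sketch rather than the post's actual gist.

```powershell
# Hedged sketch: publish a compiled mof to the pull server (GUID method).
$Guid       = '11111111-2222-3333-4444-555555555555'   # ConfigurationID from the LCM
$Source     = 'C:\DSC\BasicPull\dscclient.example.com.mof'
$ConfigPath = "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration"

# Rename the mof to the client's GUID and drop it in the pull server path
Copy-Item -Path $Source -Destination "$ConfigPath\$Guid.mof"

# Create the required <name>.mof.checksum file alongside it
New-DscChecksum -Path "$ConfigPath\$Guid.mof" -Force
```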

It's now set up for the client to pull the config. You can force the client to pull the config and watch the results with the following command:

Update-DscConfiguration -ComputerName dscclient.example.com -Wait -Verbose

To run the config after download, run:

Start-DscConfiguration -ComputerName dscclient.example.com -Wait -Verbose -Force -UseExisting

Part 3 - The Custom Script Resource

Additional useful commands:

Remove a pending configuration after seeing the warning "LCM state is changed by non-DSC operations":

Remove-DscConfigurationDocument -CimSession 'ComputerName' -Stage Pending -Force

Saturday, 3 December 2016

Part 1 - Pushing a configuration and Credentials
Part 2 - Pull Server Setup
Part 3 - Custom Scripts
Part 4 - Partial Configurations

What is PowerShell DSC

PowerShell DSC - Desired State Configuration - is a declarative platform for provisioning Windows (and non-Windows) machines with configuration information. Under the covers, there is PowerShell code which enforces the configuration on the machine. PowerShell DSC can be used to install and configure software and enforce configuration of the operating system.

PowerShell DSC is idempotent, which means a configuration can be run against a machine over and over, and if the machine is already in the described state then nothing will be changed. Included in-box are 'modules' which contain 'resources' that allow installation of packages and Windows features, copying of files and folders, ensuring registry entries are set to specific values, and more.

A very basic PowerShell DSC configuration, using the File resource, looks a lot like a normal PowerShell function. It can also take parameter values like a function:

Running the above configuration creates a .mof file for the 'localhost' computer in the folder C:\DSC\BasicConfig. To push this configuration, execute the following command:

Start-DscConfiguration -ComputerName 'localhost' -Path C:\DSC\BasicConfig -Wait -Verbose -Force

During execution, DSC will ensure C:\Temp exists. Running the same configuration again will result in no change, since the folder already exists. Omitting the -Wait switch will create a PSJob and run the configuration in the background, omitting -Verbose reduces the output from the configuration job, and -Force just tells DSC to ignore any other jobs in progress and force this one to run.
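A basic configuration of the kind described, taking a parameter like a function and ensuring C:\Temp exists with the File resource, can be sketched as:

```powershell
# Hedged sketch of a very basic DSC configuration using the File resource.
configuration BasicConfig
{
    param ([string[]]$ComputerName = 'localhost')

    Node $ComputerName
    {
        File TempFolder
        {
            DestinationPath = 'C:\Temp'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}

# Compiling the configuration writes localhost.mof to the output folder
BasicConfig -OutputPath C:\DSC\BasicConfig
```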

Modules, Resources and Extending PowerShell DSC

View installed resources and modules: Get-DscResource
Find resources on the PowerShell Gallery directly from PowerShell: Find-Module
Install a resource module: Install-Module
Quick syntax information for a resource: Get-DscResource -Name User -Syntax

The PowerShell Gallery hosts a ton of available DSC resources which can be downloaded directly.
PowerShell Gallery

The PowerShell GitHub page has source code for many of the first-party DSC modules. Viewing the source for a module can aid debugging a configuration considerably.
PowerShell GitHub

The Local Configuration Manager (LCM)

The LCM runs on all versions of WMF 4 and above and is responsible for getting and applying the configuration to the machine. The configuration of the LCM can be queried with: Get-DscLocalConfigurationManager

Modes: Push / Pull

PowerShell DSC supports two modes of getting a configuration to a client. In push mode, the LCM will apply a configuration sent to it using the Start-DscConfiguration command. In pull mode, the LCM is set up to check a server for its configuration, then download and apply it.

Push a basic configuration to another machine

DSC Client Config

✔ WinRM needs to be set up and working as a prerequisite to this, to make this easy both machines should be a member of the same domain.

DSC Push Workstation Config

Create and execute the following .ps1 file. This will create a .mof file for dscclient.example.com and save it in C:\DSC\BasicPush. Pushing the configuration can be done with the following command:

Start-DscConfiguration -ComputerName 'dscclient.example.com' -Path C:\DSC\BasicPush -Wait -Force -Verbose
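A .ps1 of the sort described might look like the following sketch. The WindowsFeature example is a placeholder chosen for illustration, not the post's actual gist.

```powershell
# Hedged sketch: compile a mof for the remote node into C:\DSC\BasicPush.
configuration BasicPush
{
    Node 'dscclient.example.com'
    {
        WindowsFeature TelnetClient
        {
            Name   = 'Telnet-Client'
            Ensure = 'Present'
        }
    }
}

# Produces C:\DSC\BasicPush\dscclient.example.com.mof
BasicPush -OutputPath C:\DSC\BasicPush
```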

Set up certificates for credential usage

For most non-trivial configurations, it's likely that credentials will be required, for access to installation packages or AD for example. Passing credentials with DSC requires a special encryption certificate to be created, with the private key installed on the remote client and the public key accessible on the workstation where the configurations are created. Executing a configuration will encrypt the credential with the public key so that the remote machine can decrypt and use it with its private key.

It's worth remembering that anyone with access to the private key can decrypt the credential from the stored mof file. Mof files and private keys should be secured appropriately. Anyone with admin-level access on the client will have access to the private key in the certificate store and will therefore be able to decrypt the password. Any credentials used should have the minimum amount of rights to get the job done.

Credential management appears to be broken between versions 4 and 5 of the WMF. I would therefore advise installing the production release of WMF 5 on Server 2012 R2 and Windows 8.1 to ensure credentials can be encrypted and recovered properly.

To create the certificate, you can use Windows PKI and create a special template. I have detailed this process in another blog post here. An interesting alternative, which I haven't tried myself, may be to use the xDSCUtils module to bootstrap with self-signed certs.

Push a configuration with credentials

Prerequisites for the client

✔ Generate the certificate and add it to the local computer personal store on the client ( Cert:\LocalMachine\My )

From your push workstation

✔ Run the following PowerShell script to export the certificate public key to the local machine. This will store the key in a local .cer b64 encoded file. The script will also generate 'configdata' for the push command.

New-DscClient.PS1 -ComputerName 'dscclient.example.com'
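A script like that might be structured along the following lines. This is a hypothetical sketch of my own, not the actual New-DscClient.PS1: the certificate selection logic, paths and the shape of the $ConfigData hashtable are all assumptions for illustration.

```powershell
# Hypothetical sketch of a New-DscClient-style script: export the public
# portion of the client's DSC certificate and build configuration data.
param ([string]$ComputerName = 'dscclient.example.com')

# Selection here is simplified; a real script would match on the
# certificate template or the Document Encryption EKU.
$Result = Invoke-Command -ComputerName $ComputerName -ScriptBlock {
    $Cert = Get-ChildItem Cert:\LocalMachine\My | Select-Object -First 1
    @{
        Bytes      = $Cert.Export('Cert')   # DER-encoded public portion only
        Thumbprint = $Cert.Thumbprint
    }
}

# Write the public key to a local .cer file
$CerPath = "C:\DSC\Certs\$ComputerName.cer"
[System.IO.File]::WriteAllBytes($CerPath, $Result.Bytes)

# Configuration data pointing DSC at the public key used for encryption
$ConfigData = @{
    AllNodes = @(
        @{
            NodeName        = $ComputerName
            CertificateFile = $CerPath
            Thumbprint      = $Result.Thumbprint
        }
    )
}
```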

✔ Generate the configuration .mof file with the following script

✔ Configure the LCM on the remote machine to use the correct certificate. The LCM configuration information was generated in the mof file using the LocalConfigurationManager section in the DSC-InstallApp.PS1 above. The command below will push the LCM configuration to the remote machine:
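The LocalConfigurationManager section compiles to a *.meta.mof alongside the main mof, and pushing it is a single command. The path below is an assumed output folder for illustration:

```powershell
# Push the compiled meta.mof (LCM settings) to the remote machine.
# C:\DSC\InstallApp is a placeholder for the configuration's output path.
Set-DscLocalConfigurationManager -ComputerName 'dscclient.example.com' -Path C:\DSC\InstallApp -Verbose
```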

PowerShell DSC credential signing requires a specific certificate type. To create this in a Windows PKI environment, I did the following:

Log into your PKI Certificate Authority server and open the Certification Authority mmc console.
Right click the Certificate Templates folder and click Manage.
Right click the Computer template and click Duplicate.

The settings I changed are as follows:

Compatibility Tab
Certification Authority: Windows Server 2012
Certificate Recipient: Windows 7 / Server 2008 R2

General Tab
Template Display Name: DSC Signing Certificate

Request Handling Tab
Purpose: Encryption

Cryptography Tab
Minimum key size: 2048 bits
Ensure Microsoft RSA SChannel Cryptographic Provider is the only selected provider

Subject Name Tab
Subject name format: DNS name

Extensions Tab
Application Policies: Remove all entries and add a new policy named Document Encryption. Enter the Object Identifier: 1.3.6.1.4.1.311.80.1
Key Usage: Click Edit and tick Allow encryption of user data

A useful thing to do at this point is to create an AD security group and add any DSC-configured computers to it. Then, in the Security tab of the template, give the group Read, Enroll and Autoenroll permissions. This will automatically create a certificate for each machine in the group. There may be some additional configuration required for this to work, which is detailed here.

Below are some screenshots of the certificate template creation.
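Once autoenrollment has run, a quick way to confirm a client received a certificate from the template is to look for the Document Encryption EKU on the machine. A hedged one-liner sketch:

```powershell
# Check the local machine store for certificates carrying the
# Document Encryption EKU (OID 1.3.6.1.4.1.311.80.1).
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.EnhancedKeyUsageList.ObjectId -contains '1.3.6.1.4.1.311.80.1' } |
    Select-Object Subject, Thumbprint, NotAfter
```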

Thursday, 24 November 2016

After adding a user or group to the Hyper-V Administrators local group on a host, you are still unable to connect to the host with Windows 10 Hyper-V Manager. The error is as follows:

"You do not have the required permission to complete this task. Contact the Administrator of the authorization policy for the computer 'SERVERNAME'."

This is due to a change in the way Hyper-V Manager connects to the server in Windows 10 / Server 2016. To re-enable the functionality, the user or group needs to be added to the "WinRMRemoteWMIUsers__" and "Hyper-V Administrators" groups. It also needs to be given the "Enable Account" and "Remote Enable" permissions on the root\interop WMI namespace.

To do this in the GUI, open Computer Management and add the user or group to the "WinRMRemoteWMIUsers__" group. On 2016 this group doesn't exist, so I added the user/group to the "Remote Management Users" group on my 2016 hosts. Also, open the "Services and Applications -> WMI Control" properties. Click the Security tab, open Root\interop and click the Security button. Add your user or group and check Remote Enable.
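The group membership part of this can be scripted. A hedged sketch, where 'CONTOSO\HVAdmins' is an example group and Add-LocalGroupMember assumes PowerShell 5.1 on the host:

```powershell
# Run elevated on the Hyper-V host. 'CONTOSO\HVAdmins' is a placeholder.
Add-LocalGroupMember -Group 'Hyper-V Administrators' -Member 'CONTOSO\HVAdmins'

# On 2016 use 'Remote Management Users' in place of 'WinRMRemoteWMIUsers__'
Add-LocalGroupMember -Group 'Remote Management Users' -Member 'CONTOSO\HVAdmins'
```

The "Enable Account" and "Remote Enable" rights on the root\interop WMI namespace still need to be granted, for example via the WMI Control snap-in.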

Wednesday, 23 November 2016

If you've tried to use Invoke-Command to run commands with credentials on a remote machine and received unexpected Access Denied messages, then you may have run across the double-hop authentication issue. If the command you tried to run needs to pass credentials to a second machine in order to execute, you will likely receive an Access Denied message like the following:

You do not have permission to perform the operation. Contact your administrator if you believe you should have permission to perform this operation.
    + CategoryInfo          : PermissionDenied: (:) [Get-VM], VirtualizationOperationFailedException
    + FullyQualifiedErrorId : AccessDenied,Microsoft.HyperV.PowerShell.Commands.GetVMCommand
    + PSComputerName        : HYPERV1

The most recent example I've seen is when attempting to run System Center Configuration Manager PowerShell commands on a remote machine using Jenkins. The error message above isn't much help, but since I'm running Invoke-Command with the -Credential and -ComputerName parameters - and then trying to authenticate to a further machine - 'double hop' is probably the issue.

Another example is trying to run something like the following:

Invoke-Command -ComputerName hyperv1 -Credential $Cred -ScriptBlock { Get-VM -ComputerName hyperv2 }

The server hyperv1 will attempt to authenticate to hyperv2, but it is not authorised to cache and forward the credentials.

How can I fix it?

In order to pass credentials via a remote machine to another machine, you need to configure CredSSP (Credential Security Support Provider). This does have security implications, since you are trusting the remote machine to cache and re-send your credentials to the second machine. You should only configure this for machines you fully trust. To configure your machines to use CredSSP, perform the following steps in an administratively elevated PowerShell console.

Client steps:

Enable-WSManCredSSP -Role "Client" -DelegateComputer "server.domain.com"

Server steps:

Enable-WSManCredSSP -Role "Server"

If you haven't already configured an HTTPS listener on the server, you can do so with this command:

winrm quickconfig -transport:https

There are prerequisites to this, such as having a valid, not self-signed certificate for the FQDN of the server machine. Once completed, you should be able to re-run your command specifying -Authentication CredSSP. An example of the Hyper-V command that didn't work before is:

Invoke-Command -ComputerName hyperv1 -Credential $Cred -ScriptBlock { Get-VM -ComputerName hyperv2 } -Authentication CredSSP

Further reading on the PowerShell command for configuring this is here. The classic way to configure this is detailed on MSDN here.

Monday, 21 November 2016

In previous posts, I showed how to configure the basics for using Terraform on Azure Resource Manager, and also how to set up WinRM over HTTPS for configuring the servers once built. In this post I take the configuration a step further and create a load balancer with an availability set. I use the load balancer public IP to NAT into the VMs using WinRM to execute a PowerShell DSC script to install the IIS feature. Here is a sample of the NAT rule used:

The VM and its components use the 'count' property in Terraform in order to build multiple VMs of the same configuration. Wherever the VM's individual properties are required, ${count.index} can be used to reference the specific object within the configuration. In the above gist, I use "${count.index + 10000}" to assign a unique WinRM port on the load balancer for each VM.

The configuration of the load balancer requires a field in the NIC for each VM which adds it into the load balancer's back-end network. Within the load balancer file there are configurations for its public IP, the front and back end of the load balancer, a couple of rules for web traffic and a probe to check which machines are functional. Here is the load balancer configuration:

The full source for this can be downloaded from GitHub here.

Friday, 18 November 2016

Checking In helps you record your flexi time or billable hours on site. Set a location using the map and Checking In will record all the time you spend on location.

Checking in can show a notification when you arrive or leave a location.

You can view your location history in the application, by syncing with a Google Calendar or by sending a .CSV file with your times, dates and locations.

To sync with Google Calendar, it is suggested to create a separate named calendar in your account so that checking in events are kept separate from your normal events and can be managed separately. You can do this in the Google Calendar web application at https://calendar.google.com

Wednesday, 16 November 2016

Now that the title is out of the way, I'll get on with explaining how I got this working. I tried several ways of getting the WinRM service configured to use HTTPS in Azure Resource Manager using Terraform. I explained in a previous post how to get the basics up and running.

There appear to be a few ways to do this in Terraform, but I've only found one that works. I attempted to use the built-in WinRM configuration option, but this requires creating a certificate locally and then uploading it to an Azure Key Vault. This sounds like a good option, but it can't yet be done purely in Terraform (I think!) I've also tried creating a self-signed certificate on the fresh-build VM using the Windows FirstLogonCommands. This proved difficult due to all sorts of timing and character interpolation issues.

The working option is to create a PowerShell script, add in some variables from Terraform and then inject that script into the VM at creation time using the custom_data field as part of the os_profile section.

First, I created the Deploy.ps1 with no parameters, which will create the local self-signed certificate and set up WinRM and a firewall rule. Second, I created the FirstLogonCommands.xml, which gets inserted into the Windows unattend.xml and runs the commands at first logon. Third, in the Virtual Machine configuration of the .tf file, the VM is configured to inject the Deploy.ps1 data into the VM with parameters from Terraform. The VM is configured to automatically log on once, which runs the FirstLogonCommands. This should then rename the custom_data blob back to Deploy.ps1, run it and configure WinRM!

The full example can be downloaded from my GitHub. Part 3, configuring an Azure RM load balancer, is here.
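The core of a Deploy.ps1 like the one described can be sketched as follows. This is a hedged outline, not the actual script from the repo; the listener and firewall rule are the standard pattern for a self-signed WinRM HTTPS setup.

```powershell
# Hedged sketch of a Deploy.ps1: self-signed certificate, HTTPS listener
# and firewall rule for WinRM on port 5986.
$Hostname = [System.Net.Dns]::GetHostName()

# Create a self-signed certificate for the machine name
$Cert = New-SelfSignedCertificate -DnsName $Hostname -CertStoreLocation Cert:\LocalMachine\My

# Create the WinRM HTTPS listener bound to that certificate
New-Item -Path WSMan:\localhost\Listener -Transport HTTPS -Address * `
    -CertificateThumbPrint $Cert.Thumbprint -Force

# Allow inbound WinRM over HTTPS
New-NetFirewallRule -DisplayName 'WinRM HTTPS' -Direction Inbound `
    -LocalPort 5986 -Protocol TCP -Action Allow
```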

In much the same way as Terraform, you will need to create an application in Azure and assign permissions so that Packer can create resources to build your image.

Unfortunately the official packer script for auth setup doesn't work for me and the commands on the site are specific to Linux. I created a PowerShell script that will create the relevant objects in Azure for authorisation. If you want to manually create these objects you will need to follow the packer documentation here.

Once the API keys have been loaded into environment variables, I created an example Windows build json file. An important thing to note is the winrm communicator section in the file; without it, the build will fail. There is also an official packer example file here.
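The communicator section in question looks roughly like the fragment below. This is a hedged excerpt for illustration: the other required azure-arm builder fields (credentials, source image, location, storage account) are omitted, and the username and timeout values are examples.

```json
{
  "builders": [
    {
      "type": "azure-arm",
      "communicator": "winrm",
      "winrm_use_ssl": true,
      "winrm_insecure": true,
      "winrm_timeout": "5m",
      "winrm_username": "packer"
    }
  ]
}
```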

Packer requires a resource group and storage account to store your completed images. These can be created any way you choose but will need to be there for the build to complete successfully.

Once the file has been created, run the build with:

packer build .\windows-example.json

Packer should then build a key vault and VM in a temporary resource group, run any provisioners configured and then output build artifacts ready to use as a VM template. Once the template is created, you will be given a path to a VHD and an Azure JSON template for building VMs.

All scripts and files can be found on my GitHub. The next step for this is to use Terraform to build a VM from the custom image.

Wednesday, 2 November 2016

I was intrigued by Terraform for building full environments in Azure. I searched for an example but only found a classic Azure set of Terraform files here. As such, I set about creating an example set to build a small number of resources in Azure RM using Terraform.

Set up your Azure RM credentials

Before you can deploy any resources in Azure RM, you need to set up your Azure credentials with Terraform. I followed the full RM portal guide at the Terraform site but was unable to select my custom application to add the role. Once I followed the guide at the Microsoft site using the classic portal, I was able to find my custom app in Azure AD and successfully completed the Terraform guide.

I am using PowerShell as my console, so I set my environment variables as such:
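A sketch of those variables is below; the placeholder values come from the Azure AD application created in the credentials guide, and these names are the ones the Terraform azurerm provider reads.

```powershell
# Placeholder values - use the IDs and secret from your Azure AD application.
$env:ARM_SUBSCRIPTION_ID = '<subscription id>'
$env:ARM_CLIENT_ID       = '<application (client) id>'
$env:ARM_CLIENT_SECRET   = '<application password>'
$env:ARM_TENANT_ID       = '<tenant id>'
```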

Once the API keys have been loaded into environment variables, I created the following files. There is a file for each major resource component. Terraform seems to handle the dependencies automatically, so no ordering is required.

The next step for this is to pull all of the resource names into a Variables.tf file so they can be set at build time. Also, this configuration doesn't include a public IP for the VM, so it's fairly useless until that's added. I'll update this post once I've added the IP address.

Thursday, 20 October 2016

You can use the pipeline to reduce data sets and view important information in PowerShell. Take this line of vSphere PowerCLI, for example:

$DStores | Select Name, FreeSpaceGB, CapacityGB

This will present some information on the datastores. Wouldn't it be nice to be able to get the percent free on each of the datastores too? You can do this by transforming the properties using an expression, as below:

$DStores | Select Name, FreeSpaceGB, CapacityGB, @{Name="FreePercent"; Expression={(($_.FreeSpaceGB/$_.CapacityGB)*100)}} | Sort -Property FreePercent -Descending

This will create a new property in the pipeline called FreePercent that can be operated on in the same way as a normal property. In this example, I sort using the new property.

Tuesday, 11 October 2016

Azure-VM-Snapshots

PowerShell functions for creating Azure RM VM snapshots.

Usage:

Load the functions:
C:> . .\AzureSnapFunctions.PS1

Create a new snapshot for all VHDs in a VM:
C:> Snap-AzureRMVM -VMName MyVM -SnapshotName "Snap 1"

View the snapshots for all VHDs on a VM:
C:> Get-AzureRMVMSnap -VMName MyVM

Delete all snapshots for all VHDs on a VM:
C:> Delete-AzureRMVMSnap -VMName MyVM

Revert to a snapshot for all VHDs on a VM:
C:> Revert-AzureRMVMSnap -VMName MyVM

Friday, 30 September 2016

This is part one of a series of posts on the new Azure Recovery Services in Azure Resource Manager.

There are 4 distinct offerings as part of Azure Recovery Services:

File and Folder Backup – This Post

Azure Backup Server – Part 2 (Coming Soon)

Site Recovery – Part 3 (Coming Soon)

Azure VM Backup – Part 4 (Coming Soon)

This post will cover the basic file and folder backup offering for Windows machines. Parts 2 and 3 will cover more advanced backup scenarios and site fail over.

The file and folder backup agent is quite limited in scope and does not protect system state or any live application data such as SQL Server, Hyper-V, SharePoint or Exchange. This offering is simply to back up a few files on a system, in much the same way as DropBox or OneDrive folder sync does.

It seems that this product is suited to desktops and laptops that house user data and need the data backed up on a daily basis.

The backup agent requires internet access but this can be configured via a proxy server.

Setup of the Agent - If you aren't interested in setup, skip to the bottom for costs and thoughts

Start by creating a new Recovery Services vault in the Azure Portal

Once created, open the resource and browse through to "Getting Started" > "Backup".

Select Files and Folders and click OK. Note if you select any other option it will suggest downloading Azure Backup Server which is a cut down version of System Center Data Protection Manager.

The wizard now prompts you to download the Recovery Services Agent and the vault credentials to configure the backup.

Download both the relevant client and credential file and complete the installation wizard on your machine. The installation is a simple 5 screen wizard which automatically installs the prerequisites.

Once installed the registration wizard begins and allows you to select the downloaded credential file

On the encryption page, select a folder in which to save your encryption phrase, then generate a new phrase or enter one.

Backup and Restore

Once installed you are presented with the familiar Windows Server Backup screen with a few additions:

When creating a backup you can only select files or folders

The backup schedule allows between 3 backups per day and 1 every 4 weeks.

The retention policy is pretty thorough

It’s possible to send an initial seed backup by post

A warning is shown that the volume size limit for backups is 54400 GB which seems way larger than would be expected for this type of backup client.

Once the backup is created there are various options in the right pane

I hit backup now to begin my initial backup

Backup Timings

My base server 2012 R2 image is using 18.5 GB on disk

Initial backup took 38 Minutes to complete the initial seed over a 100 Mbps connection.

This works out at around 3.5 MB/sec (28 Mbps) or 12.6 GB/Hr. backup speed.

I copied over a 1 GB ISO file and ran a second backup.

And just a regular backup with no changes on the server

The backup log location can be found in the portal. Unfortunately, when I checked on the server, the log was not in the file system.

Restores

Restores can be done by launching the wizard from the right hand pane. You can choose to recover from this server or another server using a vault credential file.

You can browse or search for files. Browsing allows you to select the date and file for recovery. Recovery supports restoring ACLs of original files and has overwrite options.

Recovery of a 1 GB ISO file took just 3 mins on a 100 Mbps connection. The restore actually saturated the 100 Mbps connection.

Pros

Storage transaction costs are not charged; you pay a base cost for the machine and then per GB.

LRS and GRS storage is available so backups can be kept in 1 or 2 datacenters

True backup and not file sync – Human error can lead to file deletion on normal file sync services. Crypto lockers could possibly affect files on file sync services if versioning is not set up correctly.

Cons

Unable to backup system state

No central management

Not cheap compared to Microsoft's own OneDrive offering

Final Thought – Unless you need the customisable retention for compliance, or you need to back up your data to multiple locations, the simplicity and cost of the OneDrive and DropBox standard offerings seem like the obvious choice for simple file and folder backups. You should carefully read these services' options for deleted file retention and file versioning to protect against deletion, crypto-lockers or corruption.