Thursday, March 27, 2014

I normally don’t blog about something that I consider to be a bug, but in this case the failure behavior is difficult to piece together so I thought I would.

Windows Azure Pack Gallery items have the ability to have multiple credentials defined within them. And these are used for various actions within the application scripts that are defined within the Gallery Item.

There are two credentials that are essentially ‘built in’ – the local administrator and the domain join user.

If you use the VM Role Author – these credentials are defined automatically. If you spend time playing around with the WAP Tenant API – you will see that these credentials are labeled as ‘intrinsic’ settings on the object.

The local administrator account cannot be avoided – it sets the local administrator password on the VM, and it is required by WAP. The user name ‘administrator’ is grayed out, so all you have to set is the password for the local administrator.

The domain join user is something you cannot avoid if you define that your VM will join a domain.

Your Gallery Item can have additional user credentials as well. Say a special one that is used to configure an application, or you have a script that adds a user to the local administrators group, or you need to perform some action against a remote SQL Server and need the proper user credentials.

Now – defining user accounts – there are two format options: domain\username and username@domain.

Well, guess what – for the domain join username, you can only use the domain\username format. If you attempt to use username@domain, you will see that the provisioning of the VM fails.

The failure message in the SCVMM job log is:

Warning (22044)

One or more virtual machines have failed during customization during the deployment of the service.

Nothing is clear until you look at the unattend.xml that is generated by the SCVMM deployment process and applied to the VM.
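For reference, the domain join portion of an unattend.xml looks roughly like the fragment below (a sketch of the standard Microsoft-Windows-UnattendedJoin component, not the exact file SCVMM emits – values are placeholders). Note that the domain and the user name are separate elements, which is a plausible reason a combined username@domain value ends up somewhere Windows does not expect it:

```xml
<component name="Microsoft-Windows-UnattendedJoin"
           processorArchitecture="amd64"
           publicKeyToken="31bf3856ad364e35" language="neutral"
           versionScope="nonSxS">
  <Identification>
    <Credentials>
      <Domain>contoso</Domain>         <!-- domain half of domain\username -->
      <Username>joinuser</Username>    <!-- username half -->
      <Password>placeholder</Password>
    </Credentials>
    <JoinDomain>contoso.com</JoinDomain>
  </Identification>
</component>
```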

Does this error also apply if a user types the username@domain format in the GUI for the domain join credential when deploying my Gallery Item? I just tried it – the username@domain format also fails in the GUI.

Now, I mentioned other credentials – not the ‘special’ credential that is the domain join user. Can these credentials be defined using username@domain? Yes, yes they can. Those credentials can be defined using either format.

There are a few fail-safe features I added, such as testing the paths and validating that Hyper-V is present in case the dependencies are left out of the configuration. At the same time, I did not enable defining the migration network or the security model. I had to start with something useful, and I do have a day job, so I could not make it too large.

So, just like the xHyper-V module – you download cHyper-V and unzip it to C:\Program Files\WindowsPowerShell\Modules so that PowerShell will automatically load it.

To use it, here is a sample configuration that builds on my other two posts:
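The original listing did not survive here, so the following is only a sketch of what such a configuration could look like – the resource name cVMHost and its property names are assumptions based on the surrounding description, not the actual cHyper-V source:

```powershell
Configuration Sample_cVMHost
{
    # Load the community module; 'cVMHost' is a hypothetical resource name
    Import-DscResource -ModuleName cHyper-V

    Node 'localhost'
    {
        cVMHost HostSettings
        {
            # Assumed properties: default storage paths and live migration
            VirtualHardDiskPath     = 'D:\VHDs'
            VirtualMachinePath      = 'D:\VMs'
            VirtualMachineMigration = 'True'
        }
    }
}

Sample_cVMHost                                        # generates the MOF
Start-DscConfiguration -Path .\Sample_cVMHost -Wait -Verbose
```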

And there you have it: migration is enabled, you could enable enhanced session mode, and you have new default storage paths – since you might have an SMB share, iSCSI storage, or some big LUN sitting there that you want to default to.

And if you really want to get sneaky, modify these settings prior to creating your VMs using the other xHyper-V modules.

Monday, March 17, 2014

My last post was really simple: I added the Hyper-V Role and the Hyper-V PowerShell module to a Server 2012 R2 machine using the Server Manager Desired State Configuration provider (a capability that is provided in-box).

Let’s step it up a bit.

In the community resources MSFT has released the xHyper-V Module as a component of the Desired State Configuration Resource Kit.

If you follow the instructions you will see that adding this to your Windows Server is as easy as:

Download the xHyper-V module to the server that you are testing with.

Unzip the package to C:\Program Files\WindowsPowerShell\Modules.

It should create a folder ‘xHyper-V’ (“C:\Program Files\WindowsPowerShell\Modules\xHyper-V”) under which you will find the necessary parts, including a DSCResources folder containing the classes of this module, and the xHyper-V.psd1 file.

But that is all you need to do to add this – nothing more. You are done; simply begin using it. If you want to see that your server has ‘found’ the module, open your PowerShell command window and type Get-DscResource – you will see a list of all of the providers and modules.
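For example (output trimmed; the exact columns vary by WMF version):

```powershell
# List every DSC resource the machine can find, in-box and custom
Get-DscResource

# Or check for one resource by name (xVMSwitch ships in xHyper-V)
Get-DscResource -Name xVMSwitch
```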

Back to the xHyper-V module. It includes resources for handling VM switches, VHDs, VMs, file directories, and VHD files.

If you think the module grouping is strange for the features, it just might be; a Resource Provider is designed to act upon a particular object, beginning with ‘Present’ and ‘Absent’, and then most allow applying other properties to that object to extend it. And in a configuration, multiple modules can be called to mix and match and produce a complete configuration.

In the previous article I only used one module – Server Manager. In this example I will use two. And in the next article I will use three and add more dependencies.

So, how do you use the VMSwitch class from the xHyper-V module? Just like before, but with a little addition: since VMSwitch is not an in-box resource provider, you actually have to tell the configuration to load it.

Import-DscResource brings in the custom module named ‘xHyper-V’ so that it can be used. Following that is a reference to xVMSwitch, which is a member of that module. And ‘ExternalSwitch’ is just my arbitrary name for this section.

One tricky thing that I am doing here – that I just decided to try on my own – is declaring that the first Network Adapter would be used for the switch. That is what (Get-NetAdapter)[0].Name does for me on the fly.

Since this is a required field for the resource provider and for the creation of an External Virtual Switch, I have to know what it is, and I don’t want to be bothered by testing it – my test machine has one NIC, I just want it to happen.

You could default this to the second NIC (Get-NetAdapter)[1].Name and remove the AllowManagementOS setting.

Now you have a Hyper-V server with the PowerShell module, and an external virtual switch called ‘VMs’ that uses your first NIC and shares it with the management OS.
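Pieced together from the description above, the configuration looks something like this sketch (the xVMSwitch property names follow the resource kit’s documented pattern, but treat the details as assumptions):

```powershell
Configuration Sample_xVMSwitch
{
    # xVMSwitch is not in-box, so the module must be loaded explicitly
    Import-DscResource -ModuleName xHyper-V

    Node 'localhost'
    {
        xVMSwitch ExternalSwitch
        {
            Ensure            = 'Present'
            Name              = 'VMs'
            Type              = 'External'
            # Grab the first NIC on the fly rather than hard-coding its name
            NetAdapterName    = (Get-NetAdapter)[0].Name
            AllowManagementOS = $true
        }
    }
}
```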

Next time, another dependency, and my community provider – You can configure the default paths now…

Thursday, March 13, 2014

I am not sure how many posts I am going to write about Desired State Configuration, but let’s begin with some basics with Hyper-V. Two posts from now, I will mention the DSC module that I have built for VMHost.

If you have been following – Desired State Configuration is a new feature that comes from the PowerShell team. It is a core feature that does what folks have been doing using agents for years.

You can do a number of nifty things from installation to configuration, putting files in a specific place, and more.

I really did not get excited about it until I wrote a Resource Provider, for my favorite Server Role – Hyper-V.

What looks like a function named “Sample_VMHost” is a configuration. Executing the command that calls this configuration generates a MOF-format output file in the current path, and Start-DscConfiguration applies that configuration.

The –Wait and –Verbose switches let you watch the output of the Resource Providers underneath it all that are turning that configuration into reality.
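The original listing is gone, but based on the description it would have looked roughly like this sketch (WindowsFeature is the in-box Server Manager resource; the configuration name is the author’s, the rest is my reconstruction):

```powershell
Configuration Sample_VMHost
{
    Node 'localhost'
    {
        # In-box Server Manager resource - installs the Hyper-V Role
        WindowsFeature HyperV
        {
            Ensure = 'Present'
            Name   = 'Hyper-V'
        }
    }
}

Sample_VMHost                              # writes localhost.mof under .\Sample_VMHost
Start-DscConfiguration -Path .\Sample_VMHost -Wait -Verbose
```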

After you reboot, you have the Hyper-V Role installed – but only the Hyper-V Role. That is not enough; let’s add the Hyper-V PowerShell module too.

Apply this and you will notice that it tests for the Hyper-V Role first, then adds the PowerShell module.

That is what DependsOn does: this feature depends on that other feature being enabled first. Otherwise, they all get applied at the same time. And notice how these are each enforced as single items.
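A sketch of what that dependency looks like in practice (the feature names are the real Server 2012 R2 ones; the structure is my reconstruction, not the original listing):

```powershell
Configuration Sample_VMHost
{
    Node 'localhost'
    {
        WindowsFeature HyperV
        {
            Ensure = 'Present'
            Name   = 'Hyper-V'
        }

        WindowsFeature HyperVPowerShell
        {
            Ensure    = 'Present'
            Name      = 'Hyper-V-PowerShell'
            # Do not attempt this until the Hyper-V Role is in place
            DependsOn = '[WindowsFeature]HyperV'
        }
    }
}
```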

What this does is force the caller of the function to always provide either $true or $false, and thus force the later behavior. However, what if you have a valid third state for your parameter – empty?

This means that, in my case, not passing a value actually sets that parameter to $false within the function / script. In my previous post I used the natural-language way of testing whether a string is empty before attempting to process it: ($VirtualMachineMigration) – does this parameter contain anything – or (!$VirtualMachineMigration) – does this parameter contain nothing.

Well, I had the first one close to right: ($VirtualMachineMigration) is more properly stated as ‘does the string parameter contain a value’, and (!$VirtualMachineMigration) is more properly stated as ‘is the value equal to $false’, since the behavior of setting an empty boolean to false does not allow the question ‘is the string empty’.

So, if you always type your boolean parameter as a boolean, it will always be $true or $false. And if it is allowed to be empty (not made mandatory), then it defaults to $false. Therefore it can never be empty. It always has a count or length property.
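A minimal repro of that behavior (the function and parameter names here are mine, not from the original post):

```powershell
function Test-BoolParam
{
    param
    (
        [System.Boolean]$Flag   # not mandatory, so the caller may omit it
    )
    # When the caller omits -Flag, PowerShell initializes it to $false
    "Flag is: $Flag"
}

Test-BoolParam               # Flag is: False
Test-BoolParam -Flag $true   # Flag is: True
```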

I tried to work around this by keeping the type set to [System.Boolean] and using a default value declaration – ‘if nothing is passed, default the value to x’.

I first tried $null, since I was already in that way of thinking: [System.Boolean]$VirtualMachineMigration = $null. That fails with the error: Cannot convert value "" to type "System.Boolean". Boolean parameters accept only Boolean values and numbers, such as $True, $False, 1 or 0.

Okay, so let’s be sneaky and try ‘2’. That processes without error. But what is the result? It is $true!
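You can see the same coercion directly at the prompt:

```powershell
[System.Boolean]2   # any non-zero number converts to $true
[System.Boolean]0   # zero converts to $false
```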

So, how do you get around this if you need the possibility of your boolean parameter being empty?

Guess what, don’t type it as a boolean, deal with it as a string. This way you can test for $true or $false (or 1 or 0) and set a default value that is none of those to evaluate against.

If I do that, I get my additional option of leaving the parameter empty, but I lose the option of forcing the caller of the script to pass either $true or $false, and I have to change the way I test the value.

If I stick with ($VirtualMachineMigration) as my test and I use a numeric value equal to or greater than 1 as my string, the $true condition is evaluated and the loop is entered. So, leaving it that way forces me to use an alpha string as my default value.

But I want to use ‘any’ value as my default other than ‘0’ or ‘1’ so I changed my evaluation to literally test for $true or $false as the value: ($VirtualMachineMigration -eq $true) which satisfies that need.

In the end I have my three conditions of $true, $false, and empty, my script properly processes the $true and $false, but I did lose the forcing of the value type. I will just have to document that and accept it – or add a value test using ValidateSet.

This allows me to enforce the use of ‘true’ or ‘false’ or empty or 0 or 1.

This prevents the consumer from providing errant input:

PS C:\> EnableVMMigration George
EnableVMMigration : Cannot validate argument on parameter 'VirtualMachineMigration'. The argument "George" does not belong to the set "True,False,,0,1" specified by the ValidateSet attribute. Supply an argument that is in the set and then try the command again.

and supports all of my requirements. If the caller uses $true it validates to ‘1’, $false to ‘0’

I still need to have a default value to handle the case of the empty parameter (don’t want PowerShell setting $false by default after all).
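Putting the pieces together, the parameter declaration ends up looking something like this sketch (reconstructed from the error message above, which shows the set "True,False,,0,1"; the function body is mine):

```powershell
function EnableVMMigration
{
    param
    (
        # A string rather than a boolean, so empty is a legal third state;
        # a caller passing $true or $false gets converted to 'True'/'False'
        [ValidateSet('True','False','','0','1')]
        [System.String]$VirtualMachineMigration = ''
    )

    if ($VirtualMachineMigration -eq $true)
    {
        'would enable migration here'
    }
}
```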

It never even entered my if statement, because the If evaluated to $false – because I am sending it one – although the question being evaluated, ‘is $VirtualMachineMigration empty’, is actually no different than it was when the value was $true.

So, what is happening? PowerShell is simply evaluating the parameter itself, not the question of ‘is this empty’.

Tricky little shell.

So, what if I really want to check whether this is empty and the possibility is either $true or $false? (This is my first idea to work around this; I am sure that there are others.)

So, let’s update the function so that I truly see if the parameter is empty before deciding how to act on whether it is $true or $false.
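A sketch of that updated test order – check for empty first, then branch on the value (again a reconstruction, not the original listing):

```powershell
function EnableVMMigration
{
    param
    (
        [ValidateSet('True','False','','0','1')]
        [System.String]$VirtualMachineMigration = ''
    )

    if ([string]::IsNullOrEmpty($VirtualMachineMigration))
    {
        'parameter was empty - leave the host alone'
    }
    # Note: the string '1' does not -eq $true (which stringifies to 'True'),
    # so it is tested explicitly
    elseif ($VirtualMachineMigration -eq $true -or $VirtualMachineMigration -eq '1')
    {
        'would enable migration here'
    }
    else
    {
        'would disable migration here'
    }
}
```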


Me.

I have been a full-time IT professional since 1996. Keeping networks, telecom, systems, and core business applications alive and well. In 2006 I joined the ranks of the software vendor. Now, I figure out how to improve and advance these same systems, in the process I learn a lot about them.
MVP, CCEA, VCP, MCTS, N+