The official (not working 🙁 ) way:
To replicate VMware VMs to Azure you have to install the ASR Mobility Service in the VM. But what if the VM is running a client OS (Windows 7, 8.1, 10) instead of Windows Server? Officially this is not supported by Azure Site Recovery, and when you try to install the Mobility Service you get the following nice, or not so nice 😉 , message:

The unofficial but working way:
However, besides the fact that a single VM in Azure does not qualify for an SLA guarantee and may have downtime, there is technically no reason why you cannot run a client OS in an Azure VM, especially if the VMs are used for dev/test scenarios. So why should it not be possible to replicate or migrate these VMs to Azure with ASR, you may ask? And you know what? With a little trick (installing the MSI directly on the command line) it is actually possible. Here are the steps needed to get the Mobility Service running on a client OS:

Get the Mobility Service .exe file from your ASR Process Server and copy it to a temporary location on the VM you want to replicate to Azure. You can find the setup file in the install folder of the Process Server under home\svsystems\pushinstallsvc\repository (e.g. D:\Program Files (x86)\Microsoft Azure Site Recovery\home\svsystems\pushinstallsvc\repository\Microsoft-ASR_UA_9.0.0.0_Windows_GA_31Dec2015_Release.exe)

Run the .exe and note the folder to which the installer extracts the files

Keep the Setup Wizard open and copy the contents of the folder from step 2 to a temporary location

Now you can install the Mobility Service MSI directly with msiexec by executing the following command line.
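A minimal sketch of such an msiexec call, assuming the extracted files were copied to C:\Temp\MobilityService; the folder and the MSI file name are illustrative, so use the MSI you actually copied in the previous step:

```powershell
# Hedged sketch: install the Mobility Service MSI silently and write a verbose log.
# Path and MSI file name are illustrative, not from the original post.
msiexec.exe /i "C:\Temp\MobilityService\MobilityService.msi" /qn /L*v "C:\Temp\MobilityService\install.log"
```

The verbose log is handy if the installation fails, since there is no UI in a silent install.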

Last week Microsoft released a first preview of the Microsoft Azure Stack, the software stack that allows you to run Azure in your own datacenter.

Officially, a physical server with quite a lot of CPU cores and memory is required to deploy the Azure Stack Technical Preview. Because I do not have any spare servers in my home lab to use exclusively for the Azure Stack Technical Preview, I looked for an alternative and tried to deploy it in a VM. Here is a short walkthrough of how to do it, and yes, it actually works.
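A minimal sketch of preparing such a VM, assuming a Hyper-V host that supports nested virtualization (Windows Server 2016 TP4 or newer); the VM name, resource sizes and disk path below are illustrative assumptions, not values from the original walkthrough:

```powershell
# Hedged sketch: create a generously sized Gen 2 VM and expose the virtualization
# extensions so Hyper-V can run inside it (nested virtualization).
New-VM -Name 'AzureStackPoC' -Generation 2 -MemoryStartupBytes 96GB `
    -NewVHDPath 'D:\VMs\AzureStackPoC\OS.vhdx' -NewVHDSizeBytes 200GB
Set-VMProcessor -VMName 'AzureStackPoC' -Count 16 -ExposeVirtualizationExtensions $true
# MAC address spoofing is needed so the nested VMs can reach the network
Set-VMNetworkAdapter -VMName 'AzureStackPoC' -MacAddressSpoofing On
```

Without -ExposeVirtualizationExtensions the Hyper-V role cannot start its hypervisor inside the VM, which is the whole point of this setup.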

Now, finally, you can run the PowerShell deployment script (Deploy Azure Stack.ps1) as described in the original documentation from Microsoft. The script will take several hours to finish, so better get yourself a cup of coffee or take a “little” break and hope everything goes well. If it does, you will get a functional Azure Stack installation in a VM.

Update 09.03.2016: Although the setup works fine in the VM and you can even provision Subscriptions and Tenant VMs, there are some serious issues with networking when using this nested setup. As soon as you connect to a fabric VM (with RDP or the VM console), the VM with the virtual Hyper-V host will crash.
Many thanks to Alain Vetier for pointing this out and sharing his finding here!
See also his comments below.

Lately I had to rebuild the Hyper-V hosts in my home lab several times because of the release of the different TPs of Windows Server 2016. This circumstance (and the fact that I am a great fan of PowerShell DSC) gave me the idea to do the whole base configuration of the Hyper-V host, including the LBFO NIC Teaming, vSwitch and vNICs for the converged networking configuration, through PowerShell DSC.
But soon I realized that the DSC Resource Kit from Microsoft provides DSC resources for only a subset of the needed configuration steps. The result was some PowerShell modules with my custom DSC resources.

My custom DSC resources for the approach:

– cLBFOTeam: To create and configure the Windows built-in NIC Teaming
– cVNIC: To create and configure virtual network adapters for Hyper-V host management
– cPowerPlan: To set a desired Power Plan in Windows (e.g. High Performance Plan)

You can get the modules from the PowerShell Gallery (Install-Module) or from GitHub. They will hopefully help everyone who has a similar intent.
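A minimal sketch of how these resources could be combined in a DSC configuration; the node name, team members and all property names shown are illustrative assumptions, so check the actual module schemas (e.g. with Get-DscResource) for the real parameters:

```powershell
# Hedged sketch of a Hyper-V host base configuration using the custom resources.
# Property names are assumptions, not taken from the actual module schemas.
Configuration HyperVHostBase
{
    Import-DscResource -ModuleName cLBFOTeam, cVNIC, cPowerPlan

    Node 'HV01'
    {
        cLBFOTeam MgmtTeam
        {
            TeamName    = 'Team01'
            TeamMembers = 'NIC1', 'NIC2'
            Ensure      = 'Present'
        }

        cPowerPlan HighPerformance
        {
            PlanName = 'High performance'
            Ensure   = 'Present'
        }
    }
}
```

Compiling the configuration (HyperVHostBase) produces a MOF that the LCM on the node then enforces, which is what makes the rebuilds repeatable.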

More to come:
Yes, I am not quite finished yet and I have more in the pipeline.
Currently I am also working on a fork of the xHyperV module with an adapted xVMSwitch resource that has a parameter to specify the MinimumBandwidth mode of the switch.
Furthermore, I am also planning to add support for SET (Switch Embedded Teaming) in Windows Server 2016 to the xVMSwitch resource.

So you may soon read more about this topic here. In the meantime, happy DSCing!

As Aidan Finn (and probably many others) wrote on his blog, Microsoft has published a new version of the Azure Backup software. The new software now has the ability to back up workloads such as Hyper-V VMs, SQL Server, SharePoint and Exchange on premises to disk (B2D), and to back up to the cloud for long-term retention. All in all, it sounds very similar to a DPM installation with the Azure Backup Agent. So it seems that DPM has been reborn, apart from the System Center Suite, as Azure Backup. So I decided to do a test installation, and here is how it looks:

First you need an Azure subscription with a Backup Vault. For my test I created a new vault:

Once the Backup Vault is created, you can download the new Azure Backup setup:

In addition to the Azure Backup setup, you must also download the vault credentials, which you will need later in the setup:

After the download you need to extract the files and then start setup.exe. Then the Setup Wizard starts. If you are familiar with DPM you will notice the remarkable resemblance. Note the links for DPM Protection Agent and DPM Remote Administration on the first screen 😉

Finally, after setup, you have a server with Azure Backup. The console still looks like a DPM clone. Except that the ability to back up to tape is missing, everything is very similar to the Management Console of DPM 2012 R2:

If MS will really use DPM as the basis for Azure Backup, I am very curious to see how MS will tune the underlying DPM in the future to handle big data sources like file servers with multiple TBs of data, which is not necessarily abnormal these days. But that is where DPM has a really big drawback at the moment. We will see 🙂

In TP3 the installation options were changed. Therefore, when you create a VHD(X) directly from the ISO with the Convert-WindowsImage.ps1 script, you have the choice to create a VHD with Core Server or the full GUI with Desktop Experience, but nothing in between. To create a VHD with the Minimal Server Interface (Core Server with Server Manager and MMC GUIs) or the Server Graphical Shell (without Desktop Experience), you have to add the corresponding features with DISM.

This is how you add the minimal server interface to a VHD with the core server installation:
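A minimal sketch of these steps, assuming the VHD lives at C:\VHDs\ws2016tp3.vhdx and C:\mount exists as an empty mount directory (both paths are illustrative); the feature name Server-Gui-Mgmt-Infra is the TP3 name mentioned in the TP4 update below:

```powershell
# Hedged sketch: mount the VHD, enable the Minimal Server Interface feature
# offline with DISM, then unmount and save the changes.
Mount-WindowsImage -ImagePath 'C:\VHDs\ws2016tp3.vhdx' -Index 1 -Path 'C:\mount'
dism.exe /Image:C:\mount /Enable-Feature /FeatureName:Server-Gui-Mgmt-Infra /All
Dismount-WindowsImage -Path 'C:\mount' -Save
```

The /All switch also enables any parent features the Minimal Server Interface depends on.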

Update, 11/20/2015:
This does not work anymore with TP4, which is now publicly available, as the feature “Server-Gui-Mgmt-Infra” is gone. You can add the feature “Server-Gui-Mgmt” with DISM, which gives you a similar experience. But the feature is not even listed in PowerShell (Get-WindowsFeature), so I think this is probably far from supported.
In other words: no “Minimal Server Interface” in TP4 anymore.

By default, DPM creates two volumes for every data source (a replica and a shadow copy volume). For Hyper-V and SQL databases, DPM can colocate multiple data sources on a single replica and shadow copy volume. This is a relatively well-known setting. The option is especially useful for backing up large numbers of Hyper-V VMs.

What is less known is the possibility to tune the initial size of the replica volume which DPM chooses when a new Protection Group with colocation is created.

The official Windows 10 Preview ISO from Microsoft installs only the Pro or Core edition, so it cannot be used to install or upgrade the Enterprise edition. However, the sources can easily be “upgraded” to the Enterprise edition using DISM on an existing Windows 10 installation:
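Before the edition can be changed, the install.wim has to be mounted; a minimal sketch, assuming the ISO contents were copied to the C:\temp\W10_10162 folder mentioned below and that C:\temp\mount exists as an empty directory:

```powershell
# Mount the install.wim from the copied setup sources so it can be serviced offline.
Mount-WindowsImage -ImagePath 'C:\temp\W10_10162\sources\install.wim' -Index 1 -Path 'C:\temp\mount'
```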

Change the edition of the install.wim file with the Set-WindowsEdition command:

Set-WindowsEdition -Path C:\temp\mount\ -Edition Enterprise

After that dismount the Image:

Dismount-WindowsImage -Path C:\temp\mount\ -Save

Now you can run setup.exe directly from C:\temp\W10_10162 to do an in-place upgrade of an older Windows 10 Enterprise build. If you prefer a clean installation, copy the files to a bootable USB stick and reboot.

In a network with Hyper-V Network Virtualization (using NVGRE encapsulation) the MTU (Maximum Transmission Unit) size is 42 bytes smaller than in a traditional Ethernet network (where it is 1500 bytes). The reason for this is the NVGRE encapsulation, which needs the 42 bytes to store its additional GRE header in the packet. So the maximum MTU size with Hyper-V Network Virtualization is 1458 bytes.

The problem with Linux VMs:
For VMs running Windows Server 2008 or newer this should not be a problem, because Hyper-V has a mechanism which automatically lowers the MTU size of the VM's NIC if needed (documented on the TechNet Wiki).
But with VMs running Linux you could run into a problem, because the automatic MTU size reduction does not seem to work correctly with Linux VMs: https://support.microsoft.com/en-us/kb/3021753/
This has the effect that the MTU size in the Linux VMs stays at 1500, and therefore you can experience some very weird connection issues.

The Solution:
So there are two options to resolve this issue:

Set the MTU size for the virtual NICs of all Linux VMs manually to 1458 Bytes

Enable Jumbo Frames on the physical NICs of the Hyper-V hosts. Then there is no need to lower the MTU size in the VMs.

(Or wait for kernel updates for your Linux distribution that have the fix from KB3021753 implemented.)
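For the first option, a minimal sketch of lowering the MTU inside a Linux VM, assuming the interface is named eth0 (adjust the name to your VM, and add the setting to your distribution's network configuration to make it survive a reboot):

```shell
# Set the MTU of the VM's NIC to 1458 bytes (1500 minus the 42-byte NVGRE overhead)
ip link set dev eth0 mtu 1458
# Verify the new MTU
ip link show dev eth0
```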

This works basically very well for all user objects where the path for the Terminal Services profile is set or was set at some time in the past and is now empty. But if you have a user object for which the Terminal Services settings in AD were never touched, you get a funky error message: Exception calling “InvokeGet” with “1” argument(s): “The directory property cannot be found in the cache.”

If you do an ad hoc query, this is not really a problem. But if you want to export the settings for all AD users into a CSV file, the error will probably bother you.
So what can we do? If you have a look at the properties of the ADUser object which the Get-ADUser cmdlet returns, you can see that there is a property with the name “userParameters” with a cryptic value. That's where the Terminal Services Profile Path is actually stored.

But if the user object never had a Terminal Services Profile Path set, this property simply does not exist:

Now, as a workaround, you can first check for the existence of the “userParameters” property before you query the Terminal Services Profile Path with ADSI. This could look like this:
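A minimal sketch of such an export, assuming the RSAT ActiveDirectory module is available; the CSV path is illustrative:

```powershell
# Hedged sketch: export the Terminal Services profile path of all AD users to CSV,
# skipping the ADSI call when the userParameters attribute was never written.
Import-Module ActiveDirectory

Get-ADUser -Filter * -Properties userParameters | ForEach-Object {
    $adsiUser = [ADSI]"LDAP://$($_.DistinguishedName)"
    $tsProfilePath = $null
    # Only call InvokeGet if the underlying attribute actually exists,
    # otherwise InvokeGet throws "cannot be found in the cache"
    if ($null -ne $_.userParameters) {
        $tsProfilePath = $adsiUser.psbase.InvokeGet('TerminalServicesProfilePath')
    }
    [PSCustomObject]@{
        SamAccountName       = $_.SamAccountName
        TerminalServicesPath = $tsProfilePath
    }
} | Export-Csv -Path 'C:\temp\TSProfilePaths.csv' -NoTypeInformation
```

Checking the attribute on the ADUser object first avoids the exception entirely, so the export runs cleanly even over users whose TS settings were never touched.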