A long time ago I wrote a blog post on how you can use System Center Virtual Machine Manager Bare-Metal Deployment to deploy new Hyper-V hosts. Normally this works fine, but if you have newer hardware, your Windows Server image may not include the network adapter drivers. This isn’t a huge problem, since you can mount the VHD or VHDX file for the Windows Server Hyper-V image and insert the drivers. But if you forget to update the WinPE file from Virtual Machine Manager, your deployment will fail: since the WinPE image has no network drivers included, it won’t be able to connect to the VMM Library or any other server.

You will end up with the following error, and your deployment will time out on the following screen:

“Synchronizing Time with Server”

If you check the IP configuration with ipconfig, you will see that no network adapters are available. This means you have to update your SCVMM WinPE image.

First of all you have to copy the SCVMM WinPE image. You can find this WIM file on your WDS (Windows Deployment Services) PXE server in the following location: E:\RemoteInstall\DCMgr\Boot\Windows\Images (your setup probably has another drive letter).

I copied this file to the C:\temp folder on my System Center Virtual Machine Manager server. I also copied the extracted drivers to the C:\Drivers folder.

After you have done this, you can use Greg Casanza’s (Microsoft) SCVMM Windows PE driver injection script, which will add the drivers to the WinPE image (boot.wim) and publish this new boot.wim to all your WDS servers. I also rewrote the script, which originally used drivers from the VMM Library, to use drivers from a folder instead.


$mountdir = "c:\mount"
$winpeimage = "c:\temp\boot.wim"
$winpeimagetemp = $winpeimage + ".tmp"
$path = "C:\Drivers"
mkdir "c:\mount"
copy $winpeimage $winpeimagetemp
dism /mount-wim /wimfile:$winpeimagetemp /index:1 /mountdir:$mountdir
dism /image:$mountdir /add-driver /driver:$path
dism /Unmount-Wim /MountDir:$mountdir /Commit
Publish-SCWindowsPE -Path $winpeimagetemp
del $winpeimagetemp

This will add the drivers to the Boot.wim file and publish it to the WDS servers.
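If you want to verify that the drivers actually made it into the image, you can mount the WIM again read-only and list the third-party drivers it contains. This is a quick check, assuming the same paths as in the script above:

```powershell
# Mount the updated boot image read-only and list its third-party drivers
$mountdir = "c:\mount"
dism /Mount-Wim /WimFile:c:\temp\boot.wim /Index:1 /MountDir:$mountdir /ReadOnly
dism /Image:$mountdir /Get-Drivers

# Unmount again without saving any changes
dism /Unmount-Wim /MountDir:$mountdir /Discard
```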

At the moment I spend a lot of time working with Hyper-V Network Virtualization, System Center Virtual Machine Manager and the new Network Virtualization Gateway. I am also creating some architecture design references for hosting providers which are going to use Hyper-V Network Virtualization and SMB as storage. If you are going for any kind of network virtualization (Hyper-V Network Virtualization or VXLAN), you want to make sure you can offload NVGRE traffic to the network adapter.

Well, the great news here is that the Mellanox ConnectX-3 Pro not only offers RDMA (RoCE), which is used for SMB Direct, but also hardware offloads for NVGRE- and VXLAN-encapsulated traffic. This is great and should improve the performance of Network Virtualization dramatically.
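On Windows Server 2012 R2 you can check from PowerShell whether an adapter exposes NVGRE (encapsulated packet) task offload, and enable it if needed. The adapter name below is just an example:

```powershell
# List adapters and their encapsulated packet (NVGRE) task offload state
Get-NetAdapterEncapsulatedPacketTaskOffload

# Enable the offload on a specific adapter ("Ethernet 1" is a placeholder name)
Enable-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 1"
```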

More information on the Mellanox ConnectX-3 Pro:

ConnectX-3 Pro 10/40/56GbE adapter cards with hardware offload engines to Overlay Networks (“Tunneling”), provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high performance computing.

Virtualized Overlay Networks — Infrastructure as a Service (IaaS) cloud demands that data centers host and serve multiple tenants, each with their own isolated network domain over a shared network infrastructure. To achieve maximum efficiency, data center operators are creating overlay networks that carry traffic from individual Virtual Machines (VMs) in encapsulated formats such as NVGRE and VXLAN over a logical “tunnel,” thereby decoupling the workload’s location from its network address. Overlay Network architecture introduces an additional layer of packet processing at the hypervisor level, adding and removing protocol headers for the encapsulated traffic. The new encapsulation prevents many of the traditional “offloading” capabilities (e.g. checksum, TSO) from being performed at the NIC. ConnectX-3 Pro effectively addresses the increasing demand for an overlay network, enabling superior performance by introducing advanced NVGRE and VXLAN hardware offload engines that enable the traditional offloads to be performed on the encapsulated traffic. With ConnectX-3 Pro, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.

Software Support – All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, Ubuntu, and Citrix XenServer. ConnectX-3 Pro adapters support OpenFabrics-based RDMA protocols and software and are compatible with configuration and management tools from OEMs and operating system vendors.

If you are working with System Center Virtual Machine Manager, you may want to export and import your existing VM or Service Templates. I have a customer scenario where we have two VMM installations. They are using System Center Virtual Machine Manager, Orchestrator and Service Manager to deploy new customer environments for their premium SaaS (Software as a Service) hosting solution, where they deploy Lync, Exchange and SharePoint fully automated. They have a development environment where they test new System Center Orchestrator runbooks and new templates in Virtual Machine Manager. Once they have a working runbook with working templates, they export the templates from the dev VMM and import them into the production environment.
Because I was surprised how well this works, and I think not a lot of people know about this feature, I created this short step-by-step guide.

Export Templates from Virtual Machine Manager

First select the templates you want to export and click the Export button on the ribbon bar. You can also multi-select to export multiple templates at once.

You can then configure the export with a location and a password.

You can also select which physical resources should be exported with the template. For example, if you are using the same VHD or VHDX for multiple templates, you may want to export this resource only once to save some space.

The export will look something like this: the XML files are the templates with their configurations, and the folders contain the physical resources such as VHDs and other files.
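The same export can also be scripted with the VMM PowerShell module. This is a sketch; the template name and export path are examples, and you would use Get-SCVMTemplate instead for VM templates:

```powershell
# Get the service template from the VMM library (name is an example)
$template = Get-SCServiceTemplate -Name "MyServiceTemplate"

# Export the template package, including its physical resources, to a folder
Export-SCTemplate -ServiceTemplate $template -Path "C:\Export"
```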

Import Templates in Virtual Machine Manager

To import a template just select the exported XML file.

You can change or set up the resources of the template; for example, you can select an already existing VHD from your library or an already existing Run As account.

And you can set the location for the newly imported resources (VHDs,…)
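The import side can be scripted as well. As a hedged sketch (the package path is an example):

```powershell
# Import a previously exported template package into this VMM server
Import-SCTemplate -TemplatePackagePath "C:\Export\MyServiceTemplate.xml"
```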

I hope this shows you how easy an export and import of a Service or VM Template from System Center Virtual Machine Manager is. I especially like how SCVMM handles the additional resources, so you don’t have to import the same VHD every time and you can change Run As accounts very easily.

Since System Center Virtual Machine Manager is getting more and more important and is starting to be a critical application for your environment, especially if you are using Hyper-V Network Virtualization and SCVMM is your centralized policy store, you should install Virtual Machine Manager as a highly available service. To do this, Virtual Machine Manager uses the Failover Clustering feature integrated in Windows Server.

Before you begin, check these important notes:

Not only should the SCVMM management server be highly available; the SQL Server hosting the SCVMM database and the file share for the library should be highly available as well.

You can have two or more SCVMM management servers in a cluster, but only one node will be active.

You will need to configure Distributed Key Management. You use distributed key management to store encryption keys in Active Directory Domain Services (AD DS) instead of storing the encryption keys on the computer on which the VMM management server is installed.

Create a SCVMM Service Account which has local admin rights on the SCVMM nodes.

Create a container in Active Directory Domain Services for the Distributed Key Management.

Set all IP addresses; you may also configure an independent heartbeat network.

Install the Failover Clustering feature on both servers.

After you have done these steps, you can create a failover cluster with both nodes.
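The preparation steps above can also be done from PowerShell. This is a sketch; the node names, cluster name, IP address and AD paths are all examples for your environment:

```powershell
# Install the Failover Clustering feature on both nodes (names are examples)
Invoke-Command -ComputerName VMM01, VMM02 -ScriptBlock {
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
}

# Create the cluster with a static IP address (name and IP are examples)
New-Cluster -Name VMMCluster -Node VMM01, VMM02 -StaticAddress 192.168.1.50

# Create the container for Distributed Key Management in AD DS
# (container name and domain path are examples)
New-ADObject -Name "VMMDKM" -Type "container" -Path "DC=contoso,DC=com"
```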

With the SCVMM cluster built, you now have to install the SCVMM service on the first node. When you start the SCVMM installer, it will automatically detect the SCVMM cluster and ask you whether you want to install the SCVMM server as a highly available installation.

The installation is now more or less the same as for a standalone Virtual Machine Manager server, except that you have to use Distributed Key Management and there is one screen where you configure the SCVMM cluster role with a name and an IP address.

After you have installed the first node, you can run the setup on the second node. The setup also detects the cluster and the SCVMM cluster role, and will ask you about the configuration. Many of the settings cannot be changed because they are the same on all nodes.

After you have installed both nodes you can see the SCVMM Cluster Role in the Failover Cluster Manager.

And you can of course also see all your Virtual Machine Management servers in the Virtual Machine Manager console.

After you have installed the Cisco UCS PowerTool on your System Center Orchestrator runbook servers, you can now import the Integration Pack via the System Center Orchestrator Deployment Manager. With a right-click on Integration Packs you can register the Cisco UCS IP.

After that you also have to deploy the IP to the Orchestrator Runbook servers.

You can now start to create new Orchestrator runbooks with the Runbook Designer. First open the SCO Runbook Designer and, in the Options menu, select Cisco UCS to add the path to the Cisco UCS PowerTool module (a PowerShell module). The default path where the Cisco UCS PowerTool is installed is: “C:\Program Files (x86)\Cisco\Cisco UCS PowerTool\Modules\CiscoUcsPS\CiscoUcsPS.psd1”

You can now start to automate your Cisco UCS with System Center Orchestrator.
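Outside of Orchestrator, you can also use the same PowerTool module directly from PowerShell. A minimal sketch, assuming the default module path from above and an example UCS Manager address:

```powershell
# Import the Cisco UCS PowerTool module (default install path)
Import-Module "C:\Program Files (x86)\Cisco\Cisco UCS PowerTool\Modules\CiscoUcsPS\CiscoUcsPS.psd1"

# Connect to a UCS Manager instance (address is an example; prompts for credentials)
$cred = Get-Credential
Connect-Ucs -Name "ucsmanager.contoso.com" -Credential $cred

# List the configured service profiles, then disconnect
Get-UcsServiceProfile
Disconnect-Ucs
```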

By the way, my next speaking event will be at System Center Universe Europe in Bern, Switzerland. Together with other Microsoft MVPs and consultants, I will do an advanced session on System Center 2012 R2 Virtual Machine Manager networking and an overview session on Windows Server 2012 R2 Hyper-V. So if you want to see my sessions or the other great sessions about the Microsoft cloud offering, System Center, Windows Server and Windows Azure, make sure you register for the event on systemcenteruniverse.ch.

About

My name is Thomas Maurer. I am a Microsoft MVP and work as a Cloud Architect for itnetX, a consulting and engineering company located in Switzerland. I am focused on Microsoft technologies, especially Microsoft cloud and datacenter solutions based on Microsoft System Center, Microsoft Virtualization and Microsoft Azure.