There has been a lot of information floating around about converged fabric with Hyper-V in Windows Server 2012, and much of it explains how to configure a converged fabric using PowerShell. The purpose of this blog post is to explain in detail how to deploy a converged fabric using System Center 2012 – Virtual Machine Manager SP1. The reason there are so many PowerShell examples is that a lot of the things you need to configure are not available in Hyper-V Manager, Failover Cluster Manager, or any other GUI tool built into Windows Server 2012. You can, however, deploy a converged fabric using SCVMM 2012 SP1 once you have all of your fabric components in place.

So What is Converged Fabric?

Converged Fabric is not a feature that can simply be enabled in Windows Server 2012; rather, it is the implementation of a number of features built into Windows Server 2012 that together make a converged fabric possible:

NIC Teaming (load balancing and failover – LBFO)

Hyper-V Extensible Switch

Management OS Virtual NICs

VLAN Support

Hyper-V QoS (there are multiple methods to implement QoS, this blogpost will focus on Hyper-V QoS)

A typical Hyper-V host network configuration pre-Windows Server 2012 often dedicated physical network interfaces to each host workload (1 NIC for Management, 1 NIC for Cluster/heartbeat, 1 NIC for Live Migration, 1 NIC for the Hyper-V switch, etc.). This required a lot of often underutilized physical NICs and a number of port configurations per Hyper-V host on the physical switch.

Legacy fabric example:

With the converged fabric option for the same workload requirements in Windows Server 2012 you could:

Create a team comprising your physical links

Create a Hyper-V Extensible switch utilizing that team

Create virtual network adapters for the Management OS, plugged directly into the virtual switch

VLAN tag the Management OS Virtual Network Adapters

Apply QoS policies to ensure a minimum bandwidth requirement for specific virtual adapters should network congestion occur on a single physical network link.
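The steps above are exactly what the PowerShell-based walkthroughs script by hand. As a reference point for what VMM will be doing for us, here is a sketch using the native Windows Server 2012 cmdlets (the team, switch, adapter names, and VLAN IDs are assumptions for illustration; adjust them for your environment):

```powershell
# 1. Create a team from the physical links (adapter names are assumed)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# 2. Create a Hyper-V Extensible Switch on the team, using weight-based QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# 3. Create virtual network adapters for the Management OS
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"

# 4. VLAN tag the Management OS virtual adapters (VLAN IDs are examples)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 32
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 33

# 5. Apply minimum bandwidth weights (QoS)
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
```

The rest of this post builds the same result through VMM so you never have to touch the hosts directly.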

First things first, we need to deploy all of the required fabric components in VMM to support a converged fabric. To summarize, we need to configure the following:

Logical Network

Logical Network Definition (Site)

Native Uplink Port Profile

Native Virtual Adapter Port Profiles (VMM has some built in we can use)

Logical Switch

VM Networks

So let's dig in, starting with the Logical Network.

Logical Network and Associated Logical Network Definition

Logical Networks are containers for Logical Network Definitions, which in turn contain the associated VLANs/subnets for a specific location. That's a lot of containing!

Navigate to the “Fabric” pane in VMM > and select “Logical Networks” under “Networking” > click “Create Logical Network” in the action pane.

Give your Logical Network a useful name and check the box for “Network sites within this logical network are not connected” – This will enable us to use VLAN isolation.

On the next page you need to create your Logical Network Definition (Network Site) and associate that with your host group. In my example I am creating a Logical Network Definition for my Engineering location which utilizes the following VLANs and network subnets:

Note: Although I am specifying VLAN 0 for my Hyper-V host management network, the actual VLAN on the switch side is 31; 31 is simply configured as the native VLAN on the switchport trunk for these hypervisors. Below is an example switchport configuration for my Hyper-V hosts (Cisco Nexus 5548UP):
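This is an illustrative sketch rather than a verbatim running config; the interface and the allowed-VLAN list are assumptions, with VLAN 31 as the native VLAN per the note above:

```
interface Ethernet1/1
  description Hyper-V Host Uplink
  switchport mode trunk
  switchport trunk native vlan 31
  switchport trunk allowed vlan 31-34
  spanning-tree port type edge trunk
```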

Native Uplink Port Profile

Next, create a Native Uplink Port Profile using the "Create Native Port Profile" wizard (under "Networking" in the "Fabric" pane). The uplink profile defines the teaming mode and load balancing algorithm for the teamed physical adapters, and selects which network sites (Logical Network Definitions) the team connects to.

Note: You will notice that on the "Network configuration" page of the "Create Native Port Profile" wizard there is a box you can check for "Enable Windows Network Virtualization". This enables the Windows Network Virtualization filter driver on the physical links for the purposes of NVGRE. We will not be digging into NVGRE in this post, but it should be noted that this is where you globally enable the filter so you can use that feature in Hyper-V.

Native Virtual Adapter Port Profiles

When we deploy this configuration to a Hyper-V host we will actually assign virtual network adapters to the host (just as we do with a VM). With Native Virtual Adapter Port Profiles we can define the Offload Settings (adapter offloads), Security Settings (things like DHCP guard), and Bandwidth Settings (QoS). Fortunately, for the virtual adapter workloads we plan on deploying (Host Management, Live Migration, and Cluster), VMM already ships with some good examples.

We are going to use these built-in profiles, so go ahead and review them, paying particular attention to the "Bandwidth Settings" tab and more specifically the "Minimum bandwidth weight".

Logical Switch

Instead of going Hyper-V host to Hyper-V host and manually creating your virtual switches, this is where we can build a single switch which we then deploy to our Hyper-V hosts. This Logical Switch will contain the Uplink Port Profile, which determines the teaming settings for the physical adapters on the Hyper-V host, as well as the Virtual Adapter Port Profiles available on the switch for both Management OS workloads (Host Management, Live Migration, Cluster, etc.) and virtual machine workloads (high bandwidth, medium bandwidth, etc.).

Navigate to the “Fabric” pane in VMM > and select “Logical Switches” under “Networking” > Click “Create Logical Switch” in the action pane.

On the “General” tab give your logical switch a useful name

On the "Extensions" tab leave the defaults, as we are not adding extensions as part of this walkthrough

On the “Uplink” tab specify your “Uplink Mode” to be “Team” > Click “Add” under “Uplink port profiles” and select the uplink port profile you created in the previous step.

On the "Virtual Port" tab click "Add" and select the "Port classification" we want to add > check the box for "Include a virtual network adapter port profile in this virtual port" > select the appropriate port profile for the classification. Repeat this step until you have all of the required classifications for your converged fabric design.

Click "Next" > click "Finish"

VM Networks

We now need to create the VM networks. A VM network is the object we use to plug a virtual network adapter (assigned to the host or to a VM) into a specific network (VLANs in our example).

Navigate to the “VMs and Services” pane in VMM click on “VM Networks” and click “Create VM Network”

On the “Name” tab give your VM network a useful name and select the “Logical network” you created earlier

Now that all of that is built in VMM, we can finally deploy our converged fabric to our Hyper-V host(s) via the Logical Switch. To get ready, have a Hyper-V host added to VMM which does not yet have a virtual switch, and be sure this host is added to the host group your Logical Network Definition (site) is assigned to.

Navigate to the "Fabric" pane in VMM > expand your host group > right click on your Hyper-V host and click "Properties"

Select your "Logical switch" and click "Add" under "Physical adapters" for each adapter you wish to be part of this team > also select your "Uplink Port Profile" – in my case I have 2 x 10G adapters

Select "New Virtual Network Adapter" > give it a name > check the box for "This virtual network adapter inherits settings from the physical management adapter" for the first one, which will be your Management adapter > select the appropriate "VM Network" and "Port profile classification" > repeat this step for each of your adapters (Management, LiveMigration, and Cluster in my example)

Note: Only check the box for "This virtual network adapter inherits settings from the physical management adapter" on the "Management" virtual network adapter. This moves the IP settings from the current management adapter to the virtual adapter, ensuring VMM can continue to connect to the host during the deployment of the logical switch. I have found that without this the host will likely pick up a new IP from DHCP, and unless an administrator flushes the DNS cache on the VMM server during the deployment of the Logical Switch, your job will not complete and you will be left with half of your converged fabric deployed.

Click "OK" > and click "OK" again to continue after reading the warning

Go to the Jobs view in VMM and make sure the Logical Switch is applied successfully

What about QoS?

Remember all of those Native Virtual Adapter Port Profiles that shipped with VMM, which we used when adding virtual adapters to the Management OS during the deployment of the logical switch? Each of those adapters has a minimum bandwidth weight assigned:

Host management = 10

Live migration = 40

Cluster = 10

If you run the following command on your Hyper-V host you will see how this can be calculated as a percentage of bandwidth:
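For example (BandwidthPercentage and BandwidthSetting are properties exposed by the Hyper-V module in Windows Server 2012; the extra calculated column just surfaces the raw weight for comparison):

```powershell
# Show each Management OS vNIC alongside its QoS weight and derived percentage
Get-VMNetworkAdapter -ManagementOS |
    Format-Table Name, BandwidthPercentage,
        @{Label = "Weight"; Expression = { $_.BandwidthSetting.MinimumBandwidthWeight }}
```

With weights of 10 + 40 + 10 (plus whatever weight the switch's default flow carries), each adapter's share is its weight divided by the total, so Live Migration is guaranteed the largest slice under contention.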

Weighted QoS policies are nice because they do not kick in unless traffic contention actually occurs on one of the physical links. This means Live Migration can consume 100% of a physical link until one of the other adapters contends for bandwidth, at which point QoS will throttle the traffic.

Limitations

Currently VMM is not capable of setting jumbo frames on either the physical NICs or specific Management OS vNICs. In my case I want each physical link to have an MTU of 9014, and I also want the Cluster and LiveMigration vNICs to have an MTU of 9014. I currently run a PowerShell script post-deployment to handle this, but I hope future releases of the product will resolve it so you can manage your entire fabric from one pane of glass.
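For reference, that post-deployment script boils down to a few `Set-NetAdapterAdvancedProperty` calls. The adapter names below are assumptions based on my environment (vNICs show up as "vEthernet (<name>)"); substitute your own:

```powershell
# Physical team members plus the vNICs that should carry jumbo frames (names are assumptions)
$adapters = "NIC1", "NIC2", "vEthernet (LiveMigration)", "vEthernet (Cluster)"

foreach ($adapter in $adapters) {
    # *JumboPacket is the standardized advanced-property keyword; 9014 covers a 9000-byte payload plus headers
    Set-NetAdapterAdvancedProperty -Name $adapter -RegistryKeyword "*JumboPacket" -RegistryValue 9014
}

# Verify the new MTU setting took effect on every adapter
Get-NetAdapterAdvancedProperty -Name $adapters -RegistryKeyword "*JumboPacket" |
    Format-Table Name, DisplayValue
```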

Summary

We have now deployed the example converged fabric we talked about at the beginning of this post. I hope you find this useful!