2 NICs without NIC Teaming

This configuration prioritizes the separation of the host traffic from the guest traffic.

Note

In this document, the term management operating system refers to the operating system of the computer upon which you are running Hyper-V.

The following illustration depicts this configuration.

With this configuration, you can apply all QoS features, including Bandwidth Management, Classification and Tagging, and Priority-based Flow Control, as long as the feature is supported by the NIC.

The following code, which you run in the Windows PowerShell environment in the management operating system, provides an example of how to create this configuration. In this code sample, the lines that begin with a number sign, or pound sign (#), are remarks that explain the code on the subsequent line.

# Use the inbox filter for Live Migration
New-NetQosPolicy "Live Migration" -LiveMigration -MinBandwidthWeight 30 -Priority 5
# Use the inbox filter for SMB
New-NetQosPolicy "SMB" -SMB -MinBandwidthWeight 50 -Priority 3
# Create a policy for the cluster heartbeat traffic sent on port 3343
New-NetQosPolicy "Cluster" -IPDstPort 3343 -MinBandwidthWeight 10 -Priority 6
# Use the inbox filter to capture all the rest of the traffic.
New-NetQosPolicy "Management" -Default -MinBandwidthWeight 10
# Note that the management traffic is deliberately configured not to be
# tagged.

You can also enable Bandwidth Management on the Hyper-V Virtual Switch for each individual virtual network adapter. The following code presents an example of how to create this configuration.
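For example, the following sketch creates a virtual switch with minimum bandwidth mode set to Weight and then assigns a bandwidth weight to each individual virtual network adapter; the switch, physical adapter, and VM names here are illustrative.

# Create the virtual switch with minimum bandwidth mode set to Weight
# ("ConvergedSwitch" and "GuestNIC" are example names)
New-VMSwitch "ConvergedSwitch" -NetAdapterName "GuestNIC" -MinimumBandwidthMode Weight
# Assign a minimum bandwidth weight to each individual virtual network adapter
Set-VMNetworkAdapter -VMName "VM1" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -VMName "VM2" -MinimumBandwidthWeight 20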

Optionally, if a VM is trusted (for example, in an enterprise environment where you can trust the administrator of the VM), you can enable QoS classification and tagging from within the VM. The following code, which you run in the Windows PowerShell environment on the VM, presents an example of how to create this configuration.
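As a sketch, the following tags the VM's SMB traffic with 802.1p priority 3; the priority value is illustrative and mirrors the host-side examples.

# Tag the VM's SMB traffic with 802.1p priority 3
New-NetQosPolicy "SMB" -SMB -Priority 3
# For the tag to be honored, the Hyper-V Virtual Switch must also allow
# priority tagged traffic from this VM, for example (run in the
# management operating system):
# Set-VMNetworkAdapter -VMName "VM1" -IeeePriorityTag On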

2 NICs with NIC Teaming

This configuration prioritizes high availability for all of the workloads on a computer that is running the Hyper-V server role.

You can enable Bandwidth Management on the Hyper-V Virtual Switch for both the VMs and the workloads in the management operating system. The following example commands are run in the Windows PowerShell environment in the management operating system.
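As a sketch, the following creates a virtual switch in Weight mode on top of the NIC team and reserves bandwidth for the management operating system and for a VM; the switch name, team adapter name, and weight values are illustrative.

# Create a virtual switch in Weight mode on top of the NIC team
# ("TeamedSwitch" and "Team1" are example names)
New-VMSwitch "TeamedSwitch" -NetAdapterName "Team1" -MinimumBandwidthMode Weight -AllowManagementOS $true
# Reserve bandwidth for the virtual network adapter in the management operating system
Set-VMNetworkAdapter -ManagementOS -MinimumBandwidthWeight 20
# Reserve bandwidth for a VM
Set-VMNetworkAdapter -VMName "VM1" -MinimumBandwidthWeight 40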

Optionally, you can classify and tag traffic from within the management operating system and, in a trusted environment, from within the VMs.

# Use the inbox filter for Live Migration
New-NetQosPolicy "Live Migration" -LiveMigration -Priority 5
# Use the inbox filter for SMB
New-NetQosPolicy "SMB" -SMB -Priority 3
# Create a policy for the cluster heartbeat traffic sent on port 3343
New-NetQosPolicy "Cluster" -IPDstPort 3343 -Priority 6
# Alternatively, if these workloads are in different IP subnets, they can
# be classified based on their IP subnet address.
# Assume Live Migration is on 10.1.0.0/16
New-NetQosPolicy "Live Migration" -IPDstPrefix 10.1.0.0/16 -Priority 5
# Assume SMB is on 10.2.0.0/16
New-NetQosPolicy "SMB" -IPDstPrefix 10.2.0.0/16 -Priority 3
# Assume Cluster is on 10.3.0.0/16
New-NetQosPolicy "Cluster" -IPDstPrefix 10.3.0.0/16 -Priority 6
# Note that no explicit policy is created for the Management traffic if it
# does not need to be tagged.
# Enable priority tagged traffic to go through the Hyper-V Virtual Switch
Set-VMNetworkAdapter -ManagementOS -IeeePriorityTag On
# Note that the name of the virtual network adapter in the management operating system
# is deliberately omitted in the above command so that the configuration applies
# to all virtual network adapters in the management operating system.
# Also note that if workloads are tagged in the IP header, as in the example
# shown in "2 NICs without NIC Teaming," no additional configuration is required
# to let the Hyper-V Virtual Switch pass such DSCP tagged traffic.

4 NICs in two NIC teams

This configuration provides separation of the host traffic and the guest traffic, as well as high availability for all the workloads. This configuration doubles the required number of NICs to four, but not all of them need to be 10 GbE NICs. For example, if a server has a dual-port 10 GbE NIC and two 1 GbE LOM ports, you can team the two 10 GbE ports for the management operating system and the two 1 GbE LOM ports for the VMs.
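As a sketch, the following creates the two teams by using the inbox NIC Teaming cmdlets; the team and adapter names are illustrative.

# Team the two 10 GbE ports for the management operating system
# (team and adapter names are examples)
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "10GbE-1","10GbE-2"
# Team the two 1 GbE LOM ports for the VMs
New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "LOM-1","LOM-2"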

For this configuration, you can use the same Windows PowerShell commands as those provided in the section "2 NICs without NIC Teaming."

4 NICs with a standard NIC team and two RDMA NICs

This configuration emphasizes the use of RDMA. To converge other workloads, such as Live Migration, Cluster, and Management, on the same RDMA NICs, the NICs must also support Data Center Bridging (DCB). To provide high availability for Storage, you can enable Microsoft Multipath I/O (MPIO).

With this configuration, you can apply all QoS features, including Bandwidth Management, Classification and Tagging, and PFC in the management operating system.
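As a sketch, the following shows the DCB portion of this configuration in the management operating system; priority 3 and the 50 percent reservation mirror the earlier SMB examples, and the RDMA NIC names are illustrative.

# Classify SMB Direct (RDMA) traffic, which uses port 445
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -Priority 3
# Enable Priority-based Flow Control for priority 3
Enable-NetQosFlowControl -Priority 3
# Reserve bandwidth for the SMB traffic class by using ETS
New-NetQosTrafficClass "SMB" -Priority 3 -Algorithm ETS -BandwidthPercentage 50
# Apply the DCB configuration to the RDMA NICs (names are examples)
Enable-NetAdapterQos -Name "RDMA1","RDMA2"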

You can enable Bandwidth Management on the Hyper-V Virtual Switch for the VMs and enable Classification and Tagging on the VMs, if they are trusted.

Alternate configuration of 4 NICs with a standard NIC team and two RDMA NICs

MPIO provides redundancy for Storage only. If you want to provide redundancy for Management, Live Migration, and Cluster, you can modify the previous configuration slightly, as follows.

Because the RDMA NICs are dedicated to Storage, no Bandwidth Management is required in the management operating system. Minimum Bandwidth and Maximum Bandwidth are instead configured on the Hyper-V Virtual Switch because Management, Live Migration, and Cluster each have their own virtual network adapter that is connected to the switch.
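As a sketch, the following creates a virtual network adapter in the management operating system for each of these workloads and assigns bandwidth settings on the Hyper-V Virtual Switch; the switch name, adapter names, and values are illustrative.

# Add a virtual network adapter in the management operating system for each workload
# ("TeamedSwitch" and the adapter names are examples)
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "TeamedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "TeamedSwitch"
# Assign a minimum bandwidth weight to each virtual network adapter
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
# Optionally cap a workload with an absolute maximum bandwidth (in bits per second)
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MaximumBandwidth 1000000000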

You can enable Classification and Tagging, as well as PFC, in the management operating system. You can also enable Classification and Tagging from within the VMs, if they are trusted.