Windows Server 2012 Hyper-V & Network Card (NIC) Teaming

Every time Microsoft gave us a new version of Hyper-V (including W2008 R2 SP1) we got more features that brought the solution closer in functionality to the competition. With the current W2008 R2 SP1 release, I reckon we have a solution that is superior to most vSphere deployments (when you compare the features that are actually licensed or used). Every objection, one after the next, was knocked down: Live Migration, CSV, Dynamic Memory, and so on. The last objection was NIC teaming … VMware had it but Microsoft didn’t have a supported solution.

True, MSFT hasn’t had NIC teaming, and there’s a KB article which says they don’t support it. NIC teaming is something that the likes of HP, Dell, Intel and Broadcom provided using their own software. If you had a problem, MSFT might ask you to remove it. And guess what: just about every networking issue I’ve heard of on Hyper-V was driver or NIC teaming related.

As a result, I’ve always recommended against NIC teaming using OEM software.

We want NIC teaming! That was the cry … every time, every event, every month. And the usual response from Microsoft was “we hear you but we don’t talk about futures”. Then Build came along in 2011, and they announced that NIC teaming would be included in W2012 and fully supported for Hyper-V and Failover Clustering.

NIC teaming gives us LBFO (load balancing and failover). In other words, we can aggregate the bandwidth of NICs and have automatic failover between them. If I had 2 * 10 GbE NICs, I could team them to have a single pipe of 20 Gbps while both NICs are working and connected. For failover, we typically connect the NICs to ports on different access switches. The result is that if one switch fails, the NIC connected to it becomes disconnected, but the other NIC stays connected and the team stays up and running, leaving the dependent services available to the network and their clients.

Here’s a few facts about W2012 NIC teaming:

We can connect up to 32 NICs in a single team. That’s a lot of bandwidth!

NICs in a single team can be different models from the same manufacturer, or even NICs from different manufacturers. Seeing as drivers can be troublesome, maybe you want to mix Intel and Broadcom NICs in a team for extreme uptime. Then a dodgy driver has less chance of bringing down your services.

There are three teaming modes for a team: Generic/Static Teaming requires the switch ports to be configured for the team and isn’t dynamic. LACP is self-discovering and enables dynamic expansion and reduction of the NICs in the team. Switch Independent requires no switch configuration at all – the switches have no knowledge of the team, so the team members can even be connected to different switches.
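As a rough sketch, those three modes map onto the `-TeamingMode` parameter of `New-NetLbfoTeam` (the team and NIC names here are examples, not anything from a real deployment):

```powershell
# Switch Independent: no switch configuration needed; members may span switches
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

# Generic/Static: the switch ports must be statically configured as a team
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Static

# LACP: the switch must support LACP; team membership is negotiated dynamically
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp
```

The Static and LACP modes are the switch-dependent options, so all team members must connect to the same switch (or switch stack).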

There are two modes for distributing traffic across the NIC team. With Hyper-V switch port mode, a VM’s traffic is limited to a single NIC. On lightly loaded hosts, this might not distribute the network load across the team; apparently it can work well on heavily loaded hosts with VMQ enabled. Address hashing uses a hashing algorithm to spread the load across NICs. There is 4-tuple hashing (great distribution), but it doesn’t work with “hidden” protocols such as IPsec, where it falls back to 2-tuple hashing.
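The distribution mode can be set (or changed on an existing team) with the `-LoadBalancingAlgorithm` parameter; the team name below is an example:

```powershell
# Hyper-V port mode: each VM/vNIC is pinned to one team member (pairs well with VMQ)
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort

# 4-tuple address hashing: source/destination IPs plus TCP/UDP ports
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm TransportPorts

# 2-tuple hashing: source/destination IP addresses only
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm IPAddresses
```

`TransportPorts` is the 4-tuple option; as noted above, traffic that hides the port information (IPsec, non-TCP/UDP) effectively gets the 2-tuple treatment.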

NIC teaming is easy to set up. You can use Server Manager (under Local Server) to create a team. This GUI is similar to what I’ve seen from OEMs in the past.

You can also use PowerShell cmdlets such as New-NetLbfoTeam and Set-VMNetworkAdapter.
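A minimal end-to-end sketch with those cmdlets might look like this (NIC, team, and switch names are placeholders I’ve made up for illustration):

```powershell
# Create a switch-independent team from two physical NICs
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Verify the team and its members
Get-NetLbfoTeam
Get-NetLbfoTeamMember -Team "VMTeam"

# Bind a Hyper-V virtual switch to the team interface
New-VMSwitch -Name "TeamSwitch" -NetAdapterName "VMTeam"

# Set-VMNetworkAdapter tweaks a VM's vNIC, e.g. allowing teaming inside the guest
Set-VMNetworkAdapter -VMName "VM01" -AllowTeaming On
```

The `AllowTeaming` setting is for the separate scenario of teaming virtual NICs inside a guest OS, not for the host team itself.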

One of the cool things about a NIC team is that, just like with the OEM versions, you can create virtual networks/connections on a team. Each of those connections has an IP stack, its own policies, and VLAN binding.
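Those per-team connections are team interfaces, and each can be bound to a VLAN. A quick sketch (team name and VLAN ID are examples):

```powershell
# Add a second team interface on the existing team, bound to VLAN 10
Add-NetLbfoTeamNic -Team "VMTeam" -VlanID 10

# Each team interface shows up as its own connection with its own IP stack
Get-NetLbfoTeamNic -Team "VMTeam"
```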

In the Hyper-V world, we can use NIC teams to do LBFO for important connections. We can also use them for creating converged fabrics. For example, I can take a 2U server with 2 * 10 GbE connections, team them, and use that team for all traffic. I will need some more control … but that’s another blog post.
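The basic shape of that converged design can be sketched like this – a virtual switch on the team, plus management-OS virtual NICs for each traffic class (all names below are hypothetical examples, and the QoS piece is the “more control” left for that other post):

```powershell
# Bind the Hyper-V switch to the 2 * 10 GbE team; don't auto-create a host vNIC
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false

# Carve out management-OS virtual NICs for each class of host traffic
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "CSV"           -SwitchName "ConvergedSwitch"
```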

I did this with 4 NICs. I tried every method of teaming (switch independent/static and LACP). When I transfer “random files” through SMB I get full speed (350–380 MB/s). But everything that involves Hyper-V (replication/moving VMs) only gets a max of 1 Gbit/s (115 MB/s). Also, if I make the team a Hyper-V switch with management, the whole server goes down to 1 Gbit/s (it still identifies the team as 4 Gbit/s). Do you know if it’s by design?

Are you saying that even if I run LACP with Address Hash on my team, Hyper-V Port is still used for Hyper-V specific things? And it behaves as LACP with Hyper-V Port, even though it’s Address Hash that is configured in front?

I just tried giving all guest VMs their own NICs (non-teamed), so my team is no longer a switch in Hyper-V. The team I’m using (4 Gbit/s) now carries only replication, relocation and management. Now I get 4 Gbit/s through SMB, but still only 1 Gbit/s during replication and relocation (everything related to Hyper-V).

Don’t believe so. If you have SMB going natively through a correctly configured team (also consider Switch Independent/Dependent versus your number of switches), then multichannel should kick in if the recipient side is also capable.
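One way to check whether SMB Multichannel is actually kicking in is to query it from the client side:

```powershell
# Shows the connections SMB Multichannel has established (one row per path)
Get-SmbMultichannelConnection

# Shows which local interfaces SMB considers usable, with speed/RSS/RDMA info
Get-SmbClientNetworkInterface
```

If only one connection shows up, look at interface capabilities (RSS in particular) and the team configuration before blaming Hyper-V.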

The problem with upgrading from 2008 to 2012 on a server that already has an Intel team set up is that 2012 requires not only un-teaming the NICs but also uninstalling the Intel software. This is practically impossible to do. The closest instructions we found were on the Intel website, and they were convoluted and confusing; after spending 4 hours on this alone, we are planning to reformat the hard drive and start from scratch.

We can’t even do a boot installation for 2012 if it senses that there is already a Windows OS on the server.

Good explanation of the teaming capability in Windows 2012. What I do not understand is how it works with teaming via the Broadcom NIC config utility. Do you configure teaming in Windows Server 2012 only, or with the Broadcom utility as well – or definitely not in both places? Do you know?

Trying to get my head around how VLANs work with a converged fabric setup. I have a host with 8 physical NICs teamed to make a converged fabric, but how do I set up VLANs at the physical and virtual switch levels?


About this Blog

This blog serves 2 purposes. Firstly, I want to share information with other IT pros about the technologies we work with and how to solve problems we often face. I've worked with technologies from the desktop to the server, Active Directory, System Center, security and virtualisation.

Secondly, I use my blog as a notebook. There's so much to learn and remember in our jobs that it's impossible to keep up. By blogging, I have a notebook that I can access from anywhere. It has saved my proverbial many times in the past.

Waiver

Anything you do to your IT infrastructure, applications, services, computer or anything else is 100% down to your own responsibility and liability. Aidan Finn bears no responsibility or liability for anything you do. Please independently confirm anything you read on this blog before doing whatever you decide to do.