I recently had a discussion about this very topic with one of my colleagues at work, so I thought this is the right time to talk about NIC teaming options in the host and switch(es) with LACP and Switch Independent mode. NIC teaming in general is a confusing topic for some people.

In today’s blog post, I will deep dive into Microsoft NIC Teaming options, starting from Windows Server 2012 and 2012 R2 and moving on to what’s coming in Windows Server 2016. I always hear people saying that Microsoft recommends Switch Independent / Dynamic mode in all cases, and asking why people are still using LACP. The answer is that there are cases where one is a little better than the other and has options that the other doesn’t; I will address that by the end of this post.

So without further ado, let’s start from the basics and then move into the advanced topics.

What is NIC Teaming?

NIC Teaming is also referred to by some people as NIC Bonding, or Load Balancing and Failover (LBFO).

In short, it is the combining of two or more network adapters so that the software above the team perceives them as a single adapter: one pipe that incorporates failure protection and bandwidth aggregation.

NIC teaming solutions have historically also provided per-VLAN interfaces for VLAN traffic segregation, and Microsoft’s teaming solution of course does the same thing; I will get to this shortly.

Why use Microsoft’s NIC Teaming?

Vendor agnostic – anyone’s NICs can be added to the team.

Fully integrated with Windows Server 2012 / 2012 R2 / 2016.

Lets you configure your teams to meet your needs.

Server Manager-style UI that manages multiple servers at a time.

Fully supported by Microsoft! No more calls to NIC vendors for teaming support, or getting told to turn off teaming.

Many vendors on the market have dropped out of the teaming business.

Team management is easy using PowerShell, System Center Virtual Machine Manager and Server Manager.
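As a minimal sketch (the adapter names "NIC1" and "NIC2" are assumptions; check yours with Get-NetAdapter), creating and inspecting a team from PowerShell looks like this:

```powershell
# Create a Switch Independent / Dynamic team from two physical adapters
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Review the team and its members
Get-NetLbfoTeam -Name "Team1"
Get-NetLbfoTeamMember -Team "Team1"
```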

Address Hash

Hashes on the address (and port) information of the outbound traffic. All of the traffic for the host arrives on one NIC; this is not very useful in a Hyper-V case, but quite useful in a native teaming case, because in a native teaming case you generally have only one MAC address visible to the network from the tNIC anyway.

Hyper-V Port

Hashes on the port number of the Hyper-V switch that the traffic is coming from (all traffic from a given VM goes to a given team member only; of course, when you have more VMs than team members, multiple VMs will be mapped to each team member).

Recommended for use with Hyper-V in Windows Server 2012.
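If you are on Windows Server 2012 with Hyper-V, a team using Hyper-V Port distribution can be created like this (team and adapter names are examples):

```powershell
# Hyper-V Port distribution: each Hyper-V switch port (VM) is hashed
# to exactly one team member
New-NetLbfoTeam -Name "HVTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
```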

Dynamic

Recommended for use with Hyper-V in Windows Server 2012 R2.

Dynamic distribution is Address Hash on the outbound side, and Hyper-V Port on the inbound side (are you confused yet? probably).

What that means is that outgoing traffic will be spread across the team members on a per-flow basis, while the ARPs coming from the VMs (ARP responses) are managed in a way that ensures each VM has its incoming traffic mapped to a different team member. So if you have a lot of team members, for example a team of 8 x 1 Gb NICs, the VMs will be distributed across all team members for incoming traffic, although each VM will be mapped to exactly one NIC for incoming purposes; that means a given VM’s incoming traffic cannot exceed the bandwidth of a single team member. On the outbound side, however, the distribution is on a per-flow basis, so a given VM can generate more than one team member’s worth of traffic, which will be broken down into flows and distributed across the team members.

And because Dynamic is based on flowlet technology, Microsoft keeps checking for gaps in the flows. After each gap occurs in a flow, they look at whether the flow should continue on the same NIC or whether there is a less-used NIC that the flow can be moved to, in order to balance the outbound traffic across all of the NICs.
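On Windows Server 2012 R2, an existing team can be switched over to Dynamic distribution without recreating it (assuming a team named "Team1" already exists):

```powershell
# Move an existing team to the Dynamic load balancing algorithm
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm Dynamic
Get-NetLbfoTeam -Name "Team1" | Format-List Name, TeamingMode, LoadBalancingAlgorithm
```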

Teaming Modes and Load Distribution Methods (Summary)

Active/Standby

A frequently used mode with NO real value.

Available only in Switch Independent / Address Hash operation.

Only one team member can be set to standby.

I like to give the analogy of building a four-lane highway, a freeway with two lanes in each direction, and then taking one lane in each direction out of service until the other lane is broken.

You have already made the infrastructure investment that your company paid for, you have already got all of the connections and everything in place, and you are not using half of it because you want it to be there in case the other one breaks.

It makes much better sense to use Active/Active, so that you are always using all of the infrastructure that you already bought and paid for.
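If you do need Active/Standby despite the above, a member can be placed in standby after the team is created (the adapter name is an example):

```powershell
# Mark one team member as standby; it carries traffic only when an
# active member fails
Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Standby

# Return it to active duty later
Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Active
```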

Windows Server 2012 Switch / Load Interactions

Windows Server 2012 R2 Switch / Load Interactions

Team Interfaces (tNICs)

Team interfaces can be in one of two modes:

Default mode: passes all traffic that doesn’t match any other team interface’s VLAN ID.

VLAN mode: passes all traffic that matches the VLAN.

Inbound traffic is always passed to at most one team interface only.

The Hyper-V team has said loud and clear: if you are using the Hyper-V Virtual Switch on top of a team, the team must only expose the default interface (the interface in default mode) and no others. The Hyper-V virtual switch manages all of the VLAN configuration; it’s perfectly capable of that.

Team interfaces created after initial team creation must be VLAN mode team interfaces.

Team interfaces created after initial team creation can be deleted at any time (using server manager UI or PowerShell). The primary interface cannot be deleted except by deleting the team.

It is a violation of Hyper-V rules to have more than one team interface on a team that is bound to the Hyper-V Switch.

A team with only one member (one pNIC) may be created for the purpose of disambiguating VLANs.

A team of one has no protection against failure (of course).
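For example, additional VLAN-mode team interfaces can be added to an existing team like this (the team name and VLAN IDs are examples):

```powershell
# Add VLAN-mode team interfaces to an existing team
Add-NetLbfoTeamNic -Team "Team1" -VlanID 100
Add-NetLbfoTeamNic -Team "Team1" -VlanID 200

# List all team interfaces, including the default (primary) one
Get-NetLbfoTeamNic -Team "Team1"
```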

Frequently Asked Questions

Any physical Ethernet adapter can be a team member and will work as long as the NIC meets the Windows Logo requirements.

Teaming of RDMA adapters is not supported in Windows Server 2012 and 2012 R2, but supported in Windows Server 2016 (I’ll get to this shortly).

Teaming of WiFi, WWAN, etc, adapters is not supported.

Teams of teams are not supported either.

Teaming in a Virtual Machine is supported, but limited to:

Switch Independent, Address Hash mode only.

Teams of two team members are supported.

Intended/optimized to support teaming of SR-IOV Virtual Functions (VFs) but may be used with any interfaces in the VM.

Requires configuration of the Hyper-V Virtual Switch port, or failovers may cause loss of connectivity.
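That configuration is done on the host, per VM network adapter; without it, a failover inside the guest can be blocked by the virtual switch. A sketch (the VM name is an example):

```powershell
# On the Hyper-V host: allow the guest to team its virtual NICs
Set-VMNetworkAdapter -VMName "VM01" -AllowTeaming On

# Verify the setting
Get-VMNetworkAdapter -VMName "VM01" | Format-List VMName, AllowTeaming
```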

Maximum number of NICs in a team: 32

Maximum number of team interfaces: 32

Maximum teams in a server: 32

Not all maximums may be available at the same time due to other system constraints.

Switch Independent mode detects cable faults, NIC faults, adjacent switch power issues, etc., but doesn’t detect dead switch port logic. (This is an extremely rare failure: the case where a switch is still electrically alive, but the logic on the port has hung or failed. Switch Independent mode looks at the electrical interface and won’t detect that the switch has quit passing traffic on a given port.)

Why LACP?

Because it maintains a heartbeat between the switch port logic and the host. This heartbeat allows the host to detect switch port logic errors or failures: if the switch port logic goes down, it will not send any heartbeat, it does not process the heartbeat, and it does not send back the response. The result is that NIC teaming on the host will detect that the switch port is no longer alive, and will take that particular link out of the LAG for as long as the switch port is not responding.

Allows switch to load balance ingress flows across the team members.

Integrates well with Equal-Cost Multi-Path (ECMP) through multi-chassis switches.

Does not work with Windows Server 2016 RDMA teaming, because stateful offloads like RDMA require all the traffic for that engine to arrive on a given NIC, so LACP won’t work.
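Creating an LACP team is the same one-liner with a different teaming mode (the connected switch ports must also be configured for LACP; team and adapter names are examples):

```powershell
# LACP team; the switch ports must be in a matching LACP port channel
New-NetLbfoTeam -Name "LacpTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
```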

Charbel Nemnom is a Microsoft Cloud Consultant and Technical Evangelist, a big fan of the latest IT platform solutions, and an accomplished hands-on technical professional with over 15 years of broad IT infrastructure experience serving on and guiding technical teams to optimize the performance of mission-critical enterprise systems. Excellent communicator, adept at identifying business needs and bridging the gap between functional groups and technology to foster targeted and innovative IT project development. Well respected by peers through demonstrating passion for technology and performance improvement. Extensive practical knowledge of complex system builds, network design, and virtualization.