I would just like to say that this is my own best practice – make sure you do some research elsewhere before taking it as gospel 🙂.

After recently setting up only my second new Cisco UCS, I thought I would share some of my experiences with the networking side of the Cisco UCS. In my last deployment I configured a single Virtual Machine network template that used a single fabric uplink to the core switch with failover enabled. That still works really well to this day, but the biggest issue for me is that traffic is not distributed evenly between Fabric A and Fabric B. Ideally, Virtual Machine traffic should be able to traverse either Fabric A or Fabric B, and I will admit that the design of my first Cisco UCS deployment was by no means my finest piece of work.

This time around I had the time to properly scope out the design with some vigorous whiteboarding sessions, which made a nice change as I could completely map out the UCS network and fibre design. The uplink from the UCS to the core network goes to a pair of stacked Cisco 3750X switches, which logically looks like this:

So from each Fabric Interconnect on the UCS Mini, we have an uplink to each switch. This not only provides more bandwidth for the Virtual Machines, but also ensures that we don’t have a single point of failure. If an FI or a switch fails, we still have connectivity to the corporate network. The next piece was how to set up the networking so that Virtual Machines can communicate via both FIs. The solution itself is simple, but it comes with a gotcha that you must make your virtualisation administrators aware of (which I will explain later).

The solution is to configure two vNIC templates: one vNIC template uses Fabric A with failover enabled, and the second vNIC template uses Fabric B with failover enabled:
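
For reference, the same two templates could also be created with the Cisco UCS PowerTool module rather than through the GUI. This is only a rough sketch from memory – check the cmdlet and parameter names against your version of PowerTool, and note that the hostname and MAC pool names below are purely illustrative:

```powershell
# Rough sketch using Cisco UCS PowerTool - verify cmdlet/parameter names against your version.
# The UCS hostname and the 'MAC-Pool-A' / 'MAC-Pool-B' names are illustrative, not from the real deployment.
Connect-Ucs -Name ucs-mini.mydomain.local

$org = Get-UcsOrg -Level root

# Fabric A primary with failover to B ("A-B"), and the reverse for the second template
$org | Add-UcsVnicTemplate -Name 'VM-Fabric-A' -SwitchId 'A-B' -TemplType 'updating-template' -IdentPoolName 'MAC-Pool-A'
$org | Add-UcsVnicTemplate -Name 'VM-Fabric-B' -SwitchId 'B-A' -TemplType 'updating-template' -IdentPoolName 'MAC-Pool-B'
```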

We then present two NICs to Hyper-V (which I have already renamed) and create two virtual switches on each host:
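
Creating the two virtual switches is straightforward. Here is a minimal sketch, assuming the two UCS vNICs have been renamed to “NIC-A” and “NIC-B” on the host (adjust the names to whatever Get-NetAdapter shows on yours):

```powershell
# One external virtual switch per UCS vNIC.
# Adapter names are assumptions based on my renaming; management traffic is assumed to live elsewhere.
New-VMSwitch -Name 'Switch-A' -NetAdapterName 'NIC-A' -AllowManagementOS $false
New-VMSwitch -Name 'Switch-B' -NetAdapterName 'NIC-B' -AllowManagementOS $false
```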

The task for us (or for the virtualisation administrator 😉), as we started to migrate Virtual Machines over to the new Hyper-V 2016 cluster, was to rebalance all Virtual Machines between Switch-A and Switch-B. Our approach was to put all odd-numbered Virtual Machines on Switch-A and all even-numbered Virtual Machines on Switch-B. That way roughly half of all VMs go out via Fabric A (through Switch 1), and the other half via Fabric B (through Switch 2).
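
If you would rather script the rebalancing than do it by hand, something along these lines works – assuming your VM names end in a number and the switches are named Switch-A and Switch-B as above:

```powershell
# Odd-numbered VMs -> Switch-A, even-numbered VMs -> Switch-B.
# Assumes VM names end in digits (e.g. APP-VM01); VMs that don't match are left untouched.
foreach ($vm in Get-VM) {
    if ($vm.Name -match '(\d+)$') {
        $switch = if ([int]$Matches[1] % 2 -eq 1) { 'Switch-A' } else { 'Switch-B' }
        $vm | Get-VMNetworkAdapter | Connect-VMNetworkAdapter -SwitchName $switch
    }
}
```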

What about converged networking?

Well, I had thought about using converged networking, but after reading several forums I decided to let the Cisco UCS do all the hard work. I have used the converged networking functionality on rack servers on several occasions, but for a blade environment like this it is definitely best to let the Cisco UCS handle FI failure and switch over between working FIs itself. I did attempt it during my first deployment, but for some reason I could never get it to work. With that method I presented what was a single NIC to the Hyper-V host and then, using some PowerShell scripts, logically split it out into several other NICs using the Hyper-V networking stack. Doing this on a Cisco UCS deployment does not seem to work, as having both links active at the same time seemed to cause a lot of networking problems.
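
For context, this is roughly what that converged approach looks like on the Hyper-V side – a single virtual switch with host vNICs carved out of it for the different traffic types. The adapter name, vNIC names and VLAN IDs below are purely illustrative; as I said, this worked fine for me on rack servers but not behind the UCS fabric interconnects:

```powershell
# Converged networking sketch: one vSwitch, with host vNICs split out per traffic type.
# Adapter name, vNIC names and VLAN IDs are illustrative only.
New-VMSwitch -Name 'ConvergedSwitch' -NetAdapterName 'NIC-1' -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -Name 'Management'    -SwitchName 'ConvergedSwitch'
Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName 'ConvergedSwitch'
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Management'    -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'LiveMigration' -Access -VlanId 20
```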