When following the Cisco cheat sheet for deploying the Nexus 1000v in L3 mode, make sure one of the first commands you run (or include when applying your config) is 'feature lacp'; without it you cannot activate your LACP port-channel.
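On the 1000v, the sequence looks something like this minimal sketch (the port-profile name and trunk settings are illustrative, not from the cheat sheet):

```
! Enable LACP first - without this, the channel-group
! command below will not bring the port-channel up
feature lacp

! Uplink port-profile bundling its members with LACP
port-profile type ethernet SYSTEM-UPLINK
  switchport mode trunk
  channel-group auto mode active
  no shutdown
  state enabled
```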

I set up something similar to the above: the port-channel is between the server ports and the 5k (with the 2ks acting as fabric extenders) and is set up to load-balance using LACP. This design was only recently supported by Cisco; previously you would have had to load-balance traffic on the port-channel via MAC pinning.
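On the 5k side, the FEX host interfaces facing the server get bundled into an LACP port-channel; a minimal sketch, assuming one host interface on each 2k and illustrative interface/channel numbers:

```
feature lacp

interface port-channel100
  switchport mode trunk

! FEX host interfaces (one per 2k) facing the server NICs
! 'mode active' runs LACP rather than a static bundle
interface Ethernet101/1/1
  switchport mode trunk
  channel-group 100 mode active

interface Ethernet102/1/1
  switchport mode trunk
  channel-group 100 mode active
```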

This annoying little oversight caused me an hour of grief on a customer site, as the port-channel between the 5k and the 1000v would not become active until this feature was enabled.
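If you hit the same symptom, a few standard NX-OS show commands make the problem obvious straight away (the annotations are mine):

```
show feature | include lacp   ! confirms whether LACP is enabled
show port-channel summary     ! members stuck down/suspended
show lacp neighbor            ! empty until LACP is talking end to end
```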

I recently came across a customer that had limited space, power and cooling in their datacentre (very few don't, right?) but wanted to put in a Vblock; to do so they would need to split most of the UCS chassis across multiple racks at opposing ends of the data hall. Traditionally, when I design and spec a UCS system I use the default 'passive' copper Twinax SFP+ cables. If I need to cable up Fabric Interconnects that are more than 5 meters away from the chassis, I use 'active' copper Twinax SFP+, as these can go up to 10 meters.

But in this case the distances are over 30 meters. The alternative is to go optical using SFP+ modules (SFP-10G-SR), which can cover almost any datacentre distance (300m or so).

A few of you may have noticed I said this was a Vblock, and you may be thinking you will not be allowed to do this with a Vblock as it breaks the default spec. While it does go against what's in the standard Vblock design, it is a great example of how the Vblock products are actually more flexible than people may think, and exceptions can be raised when genuine requirements demand them.

Click here for more info on installing and configuring a UCS chassis and cabling it up.

UPDATE:

Thanks to Andrew Sharrock (@AndrewSharrock) for pointing this one out. As of UCS software release 1.4, Fabric Extender Transceivers (FETs) have been supported as an alternative to the above. A FET can reach up to 100m and supports OM2, OM3 and OM4 cables. I have a feeling not many people have deployed this, as Google doesn't bring back many results on the subject, but it's an option. I'm not sure if VCE supports it within a Vblock either (VCE peeps are welcome to confirm or deny this in the comments).

From Cisco:

To replace a copper Twinax SFP+ transceiver with an optical SFP+ transceiver, follow these steps:

Step 1 Remove the copper Twinax SFP+ from the I/O module port by pulling gently on the rubber loop (see Figure 2-19). The cable and SFP+ transceiver come out as a single unit, leaving the I/O module port empty.

Step 2 Insert the optical SFP+ transceiver into the I/O module port. Make sure that it clicks firmly into place.

Using Hyper-V's extensible switch framework, the Nexus 1000v will, once Windows 8 is released, be able to provide advanced virtual networking features within a Microsoft Hyper-V environment. VM-FEX (hypervisor bypass) will also be available; it's worth noting you have to be running Hyper-V on Cisco UCS with the VIC converged network adapter to get VM-FEX.

Once Cisco gets the Tidal and newScale software aligned in such a way that they can be seen as one cloud solution, I think they will have a very credible competitor to the other cloud software on their hands. Along with Cisco's UCS, it makes a very attractive package for customers wanting a comprehensive cloud solution.