Single Wire Management

In previous UCSM versions, C-series integration required both 1G and 10G uplinks: the 1G uplink carried management traffic and the 10G carried data. Single wire management requires only a single 10G connection from the server to the FEX. There is an adaptor requirement for this to work, though: the C-series server must have a VIC 1225.

VLAN Port Count Optimization

VLAN port count optimization enables mapping the state of multiple VLANs into a single internal state. When you enable VLAN port count optimization, Cisco UCS Manager logically groups VLANs based on the port VLAN membership. This grouping increases the port VLAN count limit. VLAN port count optimization also compresses the VLAN state and reduces the CPU load on the fabric interconnect. This reduction in CPU load enables you to deploy more VLANs over more vNICs. Optimizing VLAN port count does not change any of the existing VLAN configuration on the vNICs.

This feature is only supported on 6200 series fabric interconnects.

UCSM based FC Zoning – Direct Connect Topologies

When firmware version 1.4 was released, direct-attached SAN storage became supported. About a month later, Cisco added a caveat stating that direct-attached storage was only supported if an MDS/Nexus switch was connected to handle zoning. With version 2.1, Cisco has added zoning configuration to UCSM, so an MDS/Nexus is no longer required for zoning.

Multi-Hop FCoE

This is a feature that I am very excited about. We have had several customers request this, and now it is a reality. With this enhancement you no longer have to connect traditional FC uplinks to your MDS/Nexus fabric for SAN storage. If you have a Nexus 7K/5K, you can now connect the Fabric Interconnects to FCoE interfaces over 10G uplinks. You can also share 10G uplinks for both LAN and SAN, reducing port counts and license counts.

We fortunately were already using a pair of Nexus 5500s in our lab for FC switching, so I was able to quickly set up FCoE using these steps:

1. Connected an additional 10G Ethernet connection from interface E1/20 on Fabric Interconnect A to E1/20 on the Nexus 5500 in SAN fabric A (VSAN 11).

2. Created vfc 20.

3. Added vfc 20 to VSAN 11.

4. Only allowed VSAN 11 in the VSAN trunk list.

5. Bound vfc 20 to interface E1/20.

6. Configured E1/20 in trunk mode and only allowed VLAN 811 (the FCoE VLAN that is mapped to VSAN 11).

7. Enabled spanning-tree port type edge trunk.

8. No shut both the Ethernet and vfc interfaces.

9. Connected an additional 10G Ethernet connection from interface E1/20 on Fabric Interconnect B to E1/20 on the Nexus 5500 in SAN fabric B (VSAN 12).

10. Followed the same steps as 2-8, except with VSAN 12 and VLAN 812.

11. One Fabric Interconnect at a time, configured E1/20 as an FCoE uplink in UCSM and mapped it to the appropriate VSAN.

12. Shut down the FC SAN port channels and verified the vHBAs logged into the Nexus 5500 over vfc 20.
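For reference, the Nexus-side steps for fabric A can be summarized as NX-OS configuration. This is a sketch based on the steps above, not a verified capture from the lab; it assumes `feature fcoe` is already enabled (our 5500s were already doing FC switching) and that VSAN 11 already exists in the VSAN database:

```
! FCoE VLAN 811 mapped to VSAN 11 (fabric A)
vlan 811
  fcoe vsan 11

! Virtual Fibre Channel interface bound to the 10G uplink from FI-A
interface vfc20
  bind interface Ethernet1/20
  switchport trunk allowed vsan 11
  no shutdown

! Place the vfc interface into VSAN 11
vsan database
  vsan 11 interface vfc20

! Physical 10G interface facing Fabric Interconnect A
interface Ethernet1/20
  switchport mode trunk
  switchport trunk allowed vlan 811
  spanning-tree port type edge trunk
  no shutdown
```

The fabric B Nexus mirrors this with VSAN 12 and VLAN 812.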

After configuring FCoE on the Fabric Interconnects, there were two new faults complaining about VSAN 1 being down on the FCoE uplinks. I am not using VSAN 1, and the FCoE uplink vfc interfaces were not set to trunk only VSAN 11. I am guessing this is a minor bug that Cisco will fix.

Unified Storage/Appliance Port

This feature is related to Multi-Hop FCoE. It allows a single port to carry both Ethernet LAN and FCoE SAN traffic, and it applies to both Appliance ports and Ethernet uplink ports. It requires that the Fabric Interconnects be in FC switch mode.

These Unified Ports are only supported on 6200 series hardware.

Multicast Policy with IGMP Snooping and Querier

A few of our customers will be happy about this feature. The Fabric Interconnects can now be configured as IGMP snooping queriers to keep multicast sessions from timing out. This is important for multicast applications where the multicast source is running on a UCS server.
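In UCSM this is exposed as a Multicast Policy in the GUI. For comparison, here is what the equivalent snooping-querier configuration looks like on a standalone NX-OS switch; the VLAN number and querier source address are made-up examples:

```
! Run an IGMP snooping querier for VLAN 100 so multicast group
! memberships keep getting refreshed even with no PIM router present
vlan 100
  ip igmp snooping querier 10.1.100.2
```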

Firmware Auto Install

Nice feature for large environments. This provides a wizard interface to help automate firmware upgrades across lots of servers and IOMs. You can still perform manual firmware upgrades as well. To use this feature to upgrade to 2.1, you must first manually upgrade UCS Manager to 2.1.

Mixed Firmware Versions

Cisco now supports running different versions of infrastructure and server-level firmware. This allows you to run one version for all infrastructure components (UCSM, FIs, IOMs) and another for server-level components (BIOS, CIMC, adaptor).

LAN/SAN Connectivity Policies

The official Cisco description of these policies states that they are for more granular permission delegation to LAN/SAN admins, but they could also be used in place of vNIC/vHBA templates. These policies allow you to define a set of vNICs/vHBAs along with their adaptor policies. The LAN/SAN Connectivity policies are then tied to Service Profile templates or Service Profiles. This is a nice feature for new deployments on firmware 2.1; I am not so sure it will add much value to existing deployments unless you are building out a new Service Profile Template.

Fault Suppression

This is a nice operational enhancement that several customers have wanted for a while. It allows you to perform server maintenance without getting inundated with Call Home emails. There is an option to schedule the maintenance period, or you can manually enter/exit fault suppression mode.

Scheduled backups

New operational policies allow you to schedule full state and all-configuration backups. This requires a remote FTP, TFTP, SCP, or SFTP server.

FSM Tab Enhancement

The FSM tab has additional details on what is going on under the covers. This will be very useful for troubleshooting.

Native 64-bit JRE Compatibility with OS and Browsers

This should provide better UCSM performance on x64 systems that have JRE x64 installed.

VCON Enhancement

Adds an option to vNIC/vHBA placement policies to round-robin the vNIC/vHBA placement when there are multiple vCons/mezzanine cards in the system.

DIMM Blacklisting

I was unable to find any info on this feature. I think it has something to do with blacklisting DIMMs that you never want the system to use.

Inventory and Discovery Support for Fusion-IO and LSI PCIe

Fusion-IO has developed a mezzanine card for the new B200-M3s. This feature adds the hardware to the capability catalog so that UCSM knows what these cards are.

Mezzanine Flash Storage

Related to the above

Sequential Pool ID Assignment

This will make a lot of customers happy. In previous versions, IDs pulled from the UUID, MAC, WWNN, WWPN, and CIMC pools were not assigned in order. There wasn't any rhyme or reason to the ID assignments, which drove a lot of people crazy. Now there is an option to enable sequential allocation for every pool.

For already installed systems you must go back through your pools and enable this option. For new installs you can select the sequential option when the pool is created.

RBAC Enhancement

Cisco finally produced detailed documentation on what each role privilege is allowed to do. You can find the documentation here – RBAC Enhancements

CIMC is included in Host Firmware Package

There is no longer a separate firmware update for the CIMC; it is now included in the Host Firmware Package policy.

6 thoughts on “Cisco UCS Firmware 2.1 – New Features Overview”

Regarding the DIMM blacklisting feature, it has no customer exposure at this time so it is not documented. It is being implemented in a phased manner and the initial release of UCS Firmware 2.1 has phase 1. Phase 2 would have customer visibility if it is decided to be implemented.

The idea is that currently, if a DIMM produces an uncorrectable error during POST, it is mapped out and not made available to the OS. However, a DIMM that produces an uncorrectable error at runtime has a significant impact (the host goes down) but is not mapped out; the OS can continue to access it on subsequent boots unless the DIMM later produces an uncorrectable error during POST and is mapped out by the existing POST-time functionality. A full implementation of DIMM blacklisting would cause DIMMs that produce uncorrectable errors during runtime to be mapped out so they cannot be used by the OS.

The phased approach is as follows:

Phase 1 – Save DIMM error statistics to each DIMM. Dump the statistics as binary data into server show tech detail output so we can analyze the potential impact of moving to phase 2. Also utilize TAC SR data and attachments to try to understand the potential impact of phase 2. This is where we are today: UCS firmware 2.1 is saving DIMM error statistics to the DIMM itself, and we are actively looking at that data in TAC SRs that have show tech detail data from servers running 2.1.

Phase 2 – If we go forward with it, this will likely include the ability for DIMMs to be mapped out after runtime uncorrectable errors, similar to how they are mapped out for POST-time uncorrectable errors.

I don’t imagine we would back out phase 1 because this data can be useful for other reasons. However, whether or not we go forward with phase 2 is still a topic under discussion.

So bottom-line: There is no customer visible feature (yet) surrounding DIMM blacklisting. Something that is customer visible may come in a future release.

Great overview of the new features, thank you for putting this together! I work in data center infrastructure software, and we’re always trying to keep up to date on the latest trends and feature updates to some of our competitors.