As simple as UCS might look from the outside, the configuration is hell! Six tabs (Equipment, Servers, LAN, SAN, VM, Admin), each containing a lot of different settings, show up - and in the first few minutes you stare puzzled at all the names like Service Profiles, Policies, Pools...

After some configuration time with an expert, the settings - where to configure what - are now kinda 'ok'. Meaning, I now know the most important configuration stuff.

But: the whole chassis, and therefore all the blade servers, has no Ethernet connectivity. It seems that the uplink (a configured port-channel) doesn't work. The following error message proves it:

So we decided to destroy and recreate the port-channel. Since we had changed the SFPs, the configuration might have had a problem with the change. We deleted the port members of the port-channel configuration and re-created the port-channel group, making sure we didn't make any mistakes by following the official Cisco howto.
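For reference, tearing down and re-creating a port-channel on the switch side looks roughly like this - a hedged NX-OS sketch, where the interface range and channel number are made-up examples, not our actual config:

```
! Remove the members from the bundle, then delete the old port-channel
interface Ethernet1/19-20
  no channel-group
no interface port-channel 10

! Re-create the port-channel and re-add the member ports
interface port-channel 10
  switchport mode trunk
interface Ethernet1/19-20
  channel-group 10 mode active
```

On the UCS side itself the equivalent steps are done in the UCSM GUI (LAN tab), not on a CLI.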

The funny thing is that even the screenshot in the official documentation shows error messages. Take a look at this:

Cisco states here: 'As shown, the port channel that was created is successfully enabled.' - even though errors are shown saying the port-channel has no operational members. As of now, our UCS is still down... and no solution has been found so far.

Thanks to Fabien for finding this "faulty" Cisco documentation :-).

Update 12:27: The Ethernet issue has been fixed. It was caused by the protocol used for the port-channel between an Ethernet switch and the UCS Fabric Interconnects. We decided to use the LACP protocol, which solved the problem.
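The difference boils down to the channel-group mode on the switch side. A minimal sketch (example ports and channel number, not our exact config) - UCS Fabric Interconnect uplink port-channels negotiate via LACP, so a static bundle on the upstream switch never comes up:

```
interface Ethernet1/19-20
  ! static bundling, no negotiation - stays down against the UCS uplink:
  ! channel-group 10 mode on
  !
  ! LACP negotiation - this is what brought the port-channel up:
  channel-group 10 mode active
```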

On the Nexus side we only specified the ports which should be used for the port-channel. No further configuration was done on the UCS side. On our main switch (a Cisco 3750) we defined the port-channel as seen in my previous comment. The Nexus switches are connected kinda like this: