How To Install The Cisco Nexus 1000V on vSphere 5

Installing the Cisco Nexus 1000V distributed virtual switch is not that difficult once you have learned some new concepts. Before I jump straight into installing the Nexus 1000V, let's run through the vSphere networking options and some of the reasons you'd want to implement the Nexus 1000V.

vSS (vSphere Standard Switch)

Often referred to as vSwitch0, the standard vSwitch is the default virtual switch vSphere offers you, and it provides the essential networking features for virtualising your environment, including 802.1Q VLAN tagging, egress traffic shaping, basic security policies, and NIC teaming. However, the vSS, or standard vSwitch, is an individual virtual switch on each ESX/ESXi host, so each host must be configured separately. Most large environments rule this out, as they need to maintain a consistent configuration across all of their ESX/ESXi hosts. Of course, VMware Host Profiles go some way towards achieving this, but the vSS still lacks features found in the distributed switches.
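To illustrate the per-host burden, here is how a standard vSwitch and a tagged port group might be created from the ESXi 5 command line. The vSwitch, uplink, port group, and VLAN ID below are just examples, and the same commands would have to be repeated on every host in the cluster:

```
# Create a standard vSwitch and attach an uplink (must be done per host)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1

# Create a port group on the vSwitch and tag it with VLAN 100
esxcli network vswitch standard portgroup add --portgroup-name=VM_Network --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=VM_Network --vlan-id=100
```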

vDS (vSphere Distributed Switch)

The vDS, also known as the DVS (Distributed Virtual Switch), provides a single virtual switch that spans all of the hosts in your cluster, which makes configuring networking across multiple hosts in the virtual datacenter far easier to manage. The features available with the vDS include 802.1Q VLAN tagging as before, but also ingress/egress traffic shaping, PVLANs (Private VLANs), and Network vMotion. The key benefit of a distributed virtual switch is that you only have to manage a single switch.

Cisco Nexus 1000V

In terms of features and manageability, the Nexus 1000V goes over and above the vDS: it will be immediately familiar to those with existing Cisco skills, and it adds a heap of features that the vDS can't offer, for example QoS tagging, LACP, and ACLs (Access Control Lists). Recently I have come across two Cisco UCS implementations which required the Nexus 1000V to support PVLANs in their particular configuration (due to the Fabric Interconnects using End-Host Mode). There are many reasons one would choose to implement the Cisco Nexus 1000V; let's call it N1KV for short :)

Without further delay, grab a coffee and we’ll get the N1KV installed!

Components of the Cisco Nexus 1000V on VMware vSphere

There are two main components of the Cisco Nexus 1000V distributed virtual switch: the VSM (Virtual Supervisor Module) and the VEM (Virtual Ethernet Module). If you are familiar with Cisco products and have worked with physical Cisco switches, then you will already know what supervisor modules and ethernet modules are. In essence, distributed virtual switches, whether we are talking about the VMware vDS or the N1KV, share a common architecture: the control plane and the data plane are separated, which is what makes the switch 'distributed' in the first place. By separating the control plane (the VSM) from the data plane (the VEM), a distributed switch architecture is possible, as illustrated in the diagram.

Another similarity is the use of port groups. You should be familiar with port groups, as they are present on both the VMware vSS and vDS. In Cisco terms we're talking about 'port profiles', which are configured with the relevant VLANs, QoS, ACLs, and so on. Port profiles are presented to vSphere as port groups.

Installing the Cisco Nexus 1000V

To follow along, you will need:

- The Cisco Nexus 1000V software. Note: you will need to register for a Cisco account in order to download the evaluation.
- A vSphere environment with vCenter. Note: I'm using my vSphere 5 lab for this exercise, but vSphere 4.1 will do fine.
- At least one ESX/ESXi host, preferably two or more! If you are using a lab environment and don't have the physical hardware available, create a virtual ESXi server (this post by VCritical details how to do this).

You'll also need to create the following VLANs:

- Control
- Management
- Packet

Note: If you are doing this in a lab environment you can place all of these VLANs on a single VM network, but in production make sure you have separate VLANs for them.

In the latest release of the Nexus 1000V, the Java-based installer (which we will come to in a moment) deploys the VSM (or two VSMs in HA mode) to vCenter, and a GUI wizard guides you through the steps. This has made deploying the N1KV even easier than before.

Once you have downloaded the Nexus 1000V from the Cisco website, continue on to the installation steps.

Installation Steps:

1. Extract the .zip file you downloaded from Cisco, and navigate to VSM\Installer_App\Nexus1000V-install.jar. Open this (you need Java installed) and it will launch the installation wizard. Enter the vCenter IP address, along with a username and password.

2. Select the vSphere host on which the VSM will reside and click Next.

3. Select the OVA (in the VSM\Install directory), system redundancy option, virtual machine name and datastore, then click Next.

Note: This step is new, previously you had to deploy the OVA first, then run this wizard. If you choose HA as the redundancy option, it will append -1 or -2 to the virtual machine name.

4. Select the port groups to map to the Control, Management, and Packet VLANs, then click Next.

Note: In my home lab, I just created three port groups to illustrate this. In production you would typically have these VLANs defined already; otherwise you can create new ones here on the Nexus 1000V.

5. Enter the VSM configuration: the switch name, admin password, management IP address details, domain ID, and native VLAN, then click Next.

Note: The domain ID is common between the VSMs in an HA pair, but you will need a unique domain ID for each N1KV switch if you run more than one. For example, set the domain ID to 10. The native VLAN should be set to 1 unless your network administrator specifies otherwise.
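For reference, the domain and VLAN settings entered in the wizard correspond to configuration like the following on the VSM. This is a sketch; the control and packet VLAN IDs shown are lab examples:

```
svs-domain
  domain id 10
  control vlan 260
  packet vlan 261
  svs mode L2
```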

6. You can now review your configuration. If it’s all correct, click Next.

7. The installer will now start deploying your VSM (or pair if using HA) with the configuration settings you entered during the wizard.

8. Once it has deployed, you'll get the option to migrate this host and its networks to the N1KV. Choose No here, as we'll do this later.

9. Finally you’ll get the installation summary, and you can close the wizard.

You’ll now see two Nexus 1000V VSM virtual machines in vCenter on your host. In a production environment you would typically have the VSMs on separate hosts for resilience. Within vCenter, if you navigate to Inventory > Networking you should now see the Nexus 1000V switch:
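You can also check the VSM pair from the switch itself: SSH to the VSM management IP address and run show module. With an HA pair you should see one active and one standby supervisor; the output below is indicative only:

```
n1kv# show module
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ----------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
```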

Installing the VEM (Virtual Ethernet Module)

What we are actually doing here is installing the VEM on each of your ESX/ESXi hosts. In the real world I prefer to use VMware Update Manager (VUM) to do this, as it will automatically push the VEM to a host when the host is added to the N1KV virtual switch. However, for this tutorial I will show you how to add the VEM using the command line with ESXi 5.

1. Open a web browser and open the Nexus 1000V web page, http://<IP_ADDRESS>. You will then be presented with the Cisco Nexus 1000V extension (xml file) and the VEM software. It’s the VEM we are interested in here, so download the VIB that corresponds to your ESX/ESXi build.

2. Copy the VIB file on to your ESX/ESXi host. You must place this into /var/log/vmware as ESXi 5 expects the VIB to be present there.

Note: Use the datastore browser in vCenter to do this.

3. Log into the ESXi console either directly or using SSH (if it is enabled) and enter the following command:
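On ESXi 5 the VEM VIB is installed with esxcli. The exact filename varies by build, so the one below is a placeholder for the VIB you downloaded:

```
# Install the VEM VIB (which we copied to /var/log/vmware earlier)
esxcli software vib install -v /var/log/vmware/cross_cisco-vem-v140-esx.vib

# Verify the VEM is running on the host
vem status
```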

Configuring the Nexus 1000V

Before we add our hosts to the Nexus 1000V we’ll need to create the port profiles, including the uplink port profile. The uplink port profile will be selected when we add our hosts to the switch, and this will typically be a trunk port containing all of the VLANs we wish to trunk to the hosts.
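As a sketch, here is what an uplink port profile and a virtual machine port profile look like on the VSM (enter configuration mode with config t first; the names and VLAN IDs below are examples):

```
! Uplink port profile -- a trunk carrying the VLANs we want on the hosts
port-profile type ethernet VM_Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100,260-261
  no shutdown
  system vlan 260-261
  state enabled

! Port profile for virtual machine traffic, presented to vSphere as a port group
port-profile type vethernet VM_Network
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```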

Adding ESX/ESXi Hosts to the Cisco Nexus 1000V

1. In vCenter, navigate to Inventory > Networking, right-click the Nexus 1000V switch, and select Add Host.

2. Select the vmnic(s) of the host(s) you want to add, choose VM_Uplink in the dropdown (we created this in the last section), and click Next.

Note: You’ll notice in the above screenshot that I’m adding a spare vmnic as I don’t want to lose connectivity with my standard vSwitch.

3. Migrate your port groups to the Nexus 1000V, such as the Management (vmk). Click Next.

Note: I chose not to do this; it can be done later.

4. You will then have the opportunity to migrate your virtual machines to the N1KV. This is optional and can be done later. Click Next.

5. Review the summary and click Finish.

Summary

We have just downloaded the Cisco Nexus 1000V, deployed the VSMs to vCenter, installed the VEM on the host, and added the host to the Cisco Nexus 1000V switch. The next steps are to configure the Nexus 1000V further: additional port profiles, and so on.

Common Questions:

How many Cisco Nexus 1000V virtual switches can be added to vCenter?

vCenter can connect to up to 32 distributed virtual switches, and this limit includes the Nexus 1000V. You'll need a VSM (or an HA pair for redundancy) for each N1KV switch.

Comments

I noticed you installed the device in L2 mode. From what I’ve seen in the field, L2 mode is the most common; however, with the introduction of VXLAN in SV1(5.1), things are slowly moving in the direction of L3.

(In my opinion, L3 is easier to configure from the start than L2. Migrating from L2->L3 can be problematic though.)

Great point. I skipped over the choice of L2/L3 (leaving the default) as it's so new, but it's worth covering, so thanks for highlighting it. My customers have already hinted at using (or investigating) VXLANs for multi-tenant cloud environments (e.g. VMware vCloud Director), so it may be a good topic for another blog post :)

Trackbacks

[…] the 1000V virtual switch, then you might want to read the guide I published back in April 2012 on How to Deploy the Cisco Nexus 1000V. For now, grab a coffee and let’s begin with load-balancing […]