For information about your software and platform compatibility, see Cisco Nexus 1000V and VMware Compatibility Information.

Prerequisites for the Upgrade

Before You Begin

The Upgrade Application cannot be used to upgrade the Virtual Supervisor Modules (VSMs) from Release 4.2(1)SV1(4) to the current release.

A pair of VSMs in a high availability (HA) pair is required in order to support a nondisruptive upgrade.

A system with a single VSM can only be upgraded in a disruptive manner.

The network and server administrators must coordinate the upgrade procedure with each other.

The upgrade process cannot be reversed in place. After the software is upgraded, you can downgrade only by removing the current installation and reinstalling the software. For more information, see the “Recreating the Installation” section of the Cisco Nexus 1000V Troubleshooting Guide.

A combined upgrade of ESX and the VEM in a single maintenance mode is supported in this release. A combined upgrade requires at least vCenter 5.0 Update 1, whether you upgrade manually or use VMware Update Manager (VUM).

You can manually upgrade the ESX and VEM in one maintenance mode as follows:

Place the host in maintenance mode.

Upgrade ESX to 4.1 or 5.0 as needed.

Install the VEM VIB while the host is still in maintenance mode.

Remove the host from maintenance mode.
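The four steps above can be sketched from the host as follows. This is a hedged sketch only: the VIB filename is a placeholder, and the exact commands depend on your ESX/ESXi version (on classic ESX 4.1, esxupdate is used instead of esxcli; vim-cmd is available in the ESXi shell).

~ # vim-cmd hostsvc/maintenance_mode_enter
(upgrade ESX to 4.1 or 5.0 by using your usual method, for example the ESXi installer media)
~ # esxcli software vib install -v /tmp/cross_cisco-vem-v152-esx.vib
~ # vim-cmd hostsvc/maintenance_mode_exit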

The steps for the manual combined upgrade procedure do not apply to VUM-based upgrades.

You can abort the upgrade procedure by pressing Ctrl-C.

Prerequisites for Upgrading VSMs

Upgrading VSMs has the following prerequisites:

Close any active configuration sessions before upgrading the Cisco Nexus 1000V software.

Save all changes in the running configuration to the startup configuration so that they are preserved through the upgrade.

Save a backup copy of the running configuration in external storage.

Perform a VSM backup. For more information, see the “Configuring VSM Backup and Recovery” chapter in the Cisco Nexus 1000V System Management Configuration Guide.
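For example, the configuration save and external backup described above might look like this from the VSM CLI (the scp server address and path are placeholders for your backup server):

switch# copy running-config startup-config
switch# copy running-config scp://user@203.0.113.10/backup/n1000v-running-cfg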

Use the VSM management IP address to log in to the VSM and perform management tasks.

Important:

If you connect to a VSM using the VSA serial port or the connect host command from the CIMC, do not initiate commands that are CPU intensive, such as copying an image from the TFTP server to bootflash, or commands that generate a lot of screen output or updates. Use the VSA serial connections, including CIMC, only for operations such as debugging or basic configuration of the VSA.

Prerequisites for Upgrading VEMs

Caution

If the VMware vCenter Server is hosted on the same ESX/ESXi host as a Cisco Nexus 1000V VEM, a VMware Update Manager (VUM)-assisted upgrade on the host will fail. You should manually VMotion the vCenter Server VM to another host before you perform an upgrade.

Note

When you perform any VUM operation on hosts that are a part of a cluster, ensure that VMware HA, VMware fault tolerance (FT), and VMware Distributed Power Management (DPM) features are disabled for the entire cluster. Otherwise, VUM will fail to upgrade the hosts in the cluster.

You are logged in to the VSM command-line interface (CLI) in EXEC mode.

You have a copy of your VMware documentation available for installing software on a host.

You have already obtained a copy of the VEM software file from one of the sources listed in VEM Software. For more information, see the Cisco Nexus 1000V and VMware Compatibility Information.

If you need to migrate a vSphere host from ESX to ESXi, do it before the Cisco Nexus 1000V upgrade.

You have placed the VEM software file in /tmp on the vSphere host. Placing it in the root (/) directory might interfere with the upgrade. Make sure that the root RAM disk has at least 12 MB of free space by entering the vdf command.
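For example, you might copy the VEM software file to /tmp from a workstation and then confirm the free space on the root RAM disk from the host shell (the filename and hostname are placeholders):

$ scp cross_cisco-vem-v152-esx.vib root@esx-host:/tmp/
~ # vdf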

On your upstream switches, you must have the following configuration.

On Catalyst 6500 Series switches with the Cisco IOS software, enter one of the following interface configuration commands:
(config-if) spanning-tree portfast trunk
or
(config-if) spanning-tree portfast edge trunk

On your upstream switches, we highly recommend that you globally enable the following:

Global BPDU Filtering

Global BPDU Guard

On your upstream switches where you cannot globally enable BPDU Filtering and BPDU Guard, we highly recommend that you enter the following commands:

(config-if) spanning-tree bpdufilter enable

(config-if) spanning-tree bpduguard enable
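Putting the PortFast and BPDU settings together, an upstream Catalyst interface that faces the Cisco Nexus 1000V uplinks might be configured as follows. The interface name and VLAN range are examples only:

switch(config)# interface GigabitEthernet1/1
switch(config-if)# switchport mode trunk
switch(config-if)# switchport trunk allowed vlan 260-261
switch(config-if)# spanning-tree portfast trunk
switch(config-if)# spanning-tree bpdufilter enable
switch(config-if)# spanning-tree bpduguard enable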

For more information about configuring spanning tree, BPDU, or PortFast, see the documentation for your upstream switch.

Guidelines and Limitations for Upgrading the Cisco Nexus 1000V

Before attempting to migrate to any software image version, follow these guidelines:

Caution

During the upgrade process, the Cisco Nexus 1000V does not support any new additions such as modules, Virtual NICs (vNICs), or VM NICs and does not support any configuration changes. VM NIC and vNIC port-profile changes might render VM NICs and vNICs in an unusable state.

Note

vSphere 5.0 Update 1 or later is recommended over vSphere 5.0.

You are upgrading the Cisco Nexus 1000V software to the current release.

Scheduling — Schedule the upgrade when your network is stable and steady. Ensure that no one who has access to the switch or the network configures the switch or the network during this time. You cannot configure a switch during an upgrade.

Hardware — Avoid power interruptions to the hosts that run the VSM VMs during any installation procedure.

Connectivity to remote servers — do the following:

Copy the kickstart and system images from the remote server to the Cisco Nexus 1000V.

Ensure that the switch has a route to the remote server. The switch and the remote server must be in the same subnetwork if you do not have a router to route traffic between subnets.

Software images — do the following:

Make sure that the system and kickstart images are the same version.

Retrieve the images in one of two ways:

Locally—Images are locally available on the upgrade CD-ROM/ISO image.

Remotely—Images are in a remote location and you specify the destination using the remote server parameters and the filename to be used locally.

Commands to use — do the following:

Verify connectivity to the remote server by using the ping command.

Use the one-step install all command to upgrade your software. This command upgrades the VSMs.

Do not enter another install all command while running the installation. You can run commands other than configuration commands.

During the VSM upgrade, if you try to add a new VEM or any of the VEMs are detached due to uplink flaps, the VEM attachment is queued until the upgrade completes.
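The connectivity check and one-step upgrade described above might look like the following from the VSM. The server address and image filenames are placeholders; substitute the kickstart and system filenames for the current release:

switch# ping 203.0.113.10
switch# install all kickstart bootflash:nexus-1000v-kickstart-4.2.1.SV1.5.2.bin system bootflash:nexus-1000v-4.2.1.SV1.5.2.bin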

Note

If the ESX hosts are not compatible with the software image that you install on the VSM, a traffic disruption occurs in those modules, depending on your configuration. The install all command output identifies these scenarios. The hosts must be at the right version before the upgrade.

Before upgrading the VEMs, note these guidelines and limitations:

The VEM software can be upgraded manually using the CLI or upgraded automatically using VUM.

During the VEM upgrade process, VEMs reattach to the VSM.

Connectivity to the VSM can be lost during a VEM upgrade when the interfaces of a VSM VM connect to its own Distributed Virtual Switch (DVS).

If you are upgrading a VEM using a Cisco Nexus 1000V bundle, follow the instructions in your VMware documentation. For more details about VMware bundled software, see the Cisco Nexus 1000V and VMware Compatibility Information.

With ESX and ESXi 4.1, after the upgrade, the esxupdate --vib-view query command might show two Cisco VIBs as installed. If the upgrade has otherwise been successful, you can ignore this condition.

Caution

Do not enter the vemlog, vemcmd, or vempkt commands during the VEM upgrade process because these commands impact the upgrade.

Note

For the ESXi 5.1 release (799733), the minimum versions are as follows:

VMware vCenter Server 5.1, 799731

VMware Update Manager 5.1, 782803

For the ESXi 5.0.0 release, the minimum versions are as follows:

VMware vCenter Server 5.0.0, 455964

VMware Update Manager 5.0.0, 432001

If you plan to do a combined upgrade of ESX and VEM, the minimum vCenter Server/VUM version required is 623373/639867.

For the ESX/ESXi 4.1.0 release and later, the minimum versions are as follows:

VMware vCenter Server 4.1.0, 258902

VMware Update Manager 4.1.0, 256596

This procedure is different from the upgrade to Release 4.2(1)SV1(4). In this procedure, you upgrade the VSMs first by using the install all command and then you upgrade the VEMs.

The following figure shows the workflow for a Cisco Nexus 1000V only upgrade.

Figure 1. Cisco Nexus 1000V Only Upgrade

Combined Upgrade of vSphere and Cisco Nexus 1000V

You can perform a combined upgrade of vSphere and Cisco Nexus 1000V.

If any of the hosts are running ESX 4.0 when the VSM is upgraded, the install all command reports that some VEMs are incompatible. You can proceed if you are planning a combined upgrade of the Cisco Nexus 1000V and ESX after the VSM upgrade completes.

Note

A combined upgrade is supported only for vCenter Server 5.0 Update 1 or later.

Software Images

The software image install procedure depends on the following factors:

Software images—The kickstart and system image files reside in directories or folders that you can access from the Cisco Nexus 1000V software prompt.

Image version—Each image file has a version.

Disk—The bootflash: resides on the VSM.

ISO file—If a local ISO file is passed to the install all command, the kickstart and system images are extracted from the ISO file.

In-Service Software Upgrades on Systems with Dual VSMs

Note

Performing an In-Service Software Upgrade (ISSU) from Cisco Nexus 1000V Release 4.2(1)SV1(4) or Release 4.2(1)SV1(4a) to the current release of Cisco Nexus 1000V using ISO files is not supported. You must use kickstart and system files to perform an ISSU upgrade to the current release of Cisco Nexus 1000V.

The Cisco Nexus 1000V software supports in-service software upgrades (ISSUs) for systems with dual VSMs. An ISSU can update the software images on your switch without disrupting data traffic. Only control traffic is disrupted. If an ISSU causes a disruption of data traffic, the Cisco Nexus 1000V software warns you before proceeding so that you can stop the upgrade and reschedule it to a time that minimizes the impact on your network.

Note

On systems with dual VSMs, you should have access to the console of both VSMs to maintain connectivity when the switchover occurs during upgrades. If you are performing the upgrade over Secure Shell (SSH) or Telnet, the connection will drop when the system switchover occurs, and you must reestablish the connection.

An ISSU updates the following images:

Kickstart image

System image

VEM images

All of the following processes are initiated automatically by the upgrade process after the network administrator enters the install all command.

ISSU Process for the Cisco Nexus 1000V

The following figure shows the ISSU process.

Figure 3. ISSU Process

ISSU VSM Switchover

The following figure provides an example of the VSM status before and after an ISSU switchover.

Figure 4. Example of an ISSU VSM Switchover

ISSU Command Attributes

Support

The install all command supports an in-service software upgrade (ISSU) on dual VSMs in an HA environment and performs the following actions:

Determines whether the upgrade is disruptive and asks if you want to continue.

Copies the kickstart and system images to the standby VSM. Alternatively, if a local ISO file is passed to the install all command instead, the kickstart and system images are extracted from the file.

Sets the kickstart and system boot variables.

Reloads the standby VSM with the new Cisco Nexus 1000V software.

Causes the active VSM to reload when the switchover occurs.

Benefits

The install all command provides the following benefits:

You can upgrade the VSM by using the install all command.

You can receive descriptive information on the intended changes to your system before you continue with the installation.

You have the option to cancel the command. Once the effects of the command are presented, you can continue or cancel when you see this question (the default is no):
Do you want to continue (y/n) [n]: y

You can upgrade the VSM using the least disruptive procedure.

You can see the progress of this command on the console, Telnet, and SSH screens:

After a switchover process, you can see the progress from both the VSMs.

Before a switchover process, you can see the progress only from the active VSM.

The install all command automatically checks the image integrity, which includes the running kickstart and system images.

The install all command performs a platform validity check to verify that an incorrect image is not used.

The Ctrl-C escape sequence gracefully ends the install all command. The command sequence completes the update step in progress and returns to the switch prompt. (Other upgrade steps cannot be ended by using Ctrl-C.)

After running the install all command, if any step in the sequence fails, the command completes the step in progress and ends.

Log in to Cisco.com to access the links provided in this document. To log in to Cisco.com, go to http://www.cisco.com/ and click Log In at the top of the page. Enter your Cisco username and password.

We recommend that you have the kickstart and system image files for at least one previous release of the Cisco Nexus 1000V software on the system to use if the new image files do not load successfully.

Delete any unnecessary files to make space available if you need more space on the standby VSM.

Step 9

If you plan to install the images from the bootflash:, copy the Cisco Nexus 1000V kickstart and system images or the ISO image to the active VSM by using a transfer protocol. You can use ftp:, tftp:, scp:, or sftp:. The examples in this procedure use scp:.
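For example, using scp: as in this procedure, the copy to the active VSM might look like the following. The server address, path, and filenames are placeholders:

switch# copy scp://user@203.0.113.10/images/nexus-1000v-kickstart-4.2.1.SV1.5.2.bin bootflash:
switch# copy scp://user@203.0.113.10/images/nexus-1000v-4.2.1.SV1.5.2.bin bootflash: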

Note

When you download an image file, change the IP address or DNS name to that of your FTP environment, and change the path to where the files are located.

Read the release notes for the related image file. See the Cisco Nexus 1000V Release Notes.

Step 12

Determine if the Virtual Security Gateway (VSG) is configured in the deployment:

If the following output is displayed, the Cisco VSG is configured in the deployment. You must follow the upgrade procedure in the “Complete Upgrade Procedure” section in Chapter 7, “Upgrading the Cisco Virtual Security Gateway and Cisco Virtual Network Management Center” of the Cisco Virtual Security Gateway and Cisco Virtual Network Management Center Installation and Upgrade Guide.

As part of the upgrade process, the standby VSM is reloaded with new images. Once it becomes the HA standby again, the upgrade process initiates a switchover. The upgrade then continues from the new active VSM with the following output:

In the License Agreement window, click the I agree to the terms in the license agreement radio button.

Step 9

Click Next.

Step 10

In the Database Options screen, click Next.

Step 11

Click the Upgrade existing vCenter Server database radio button and check the I have taken a backup of the existing vCenter Server database and SSL certificates in the folder: C:\ProgramData\VMware\VMware VirtualCenter\SSL\. check box.

Step 12

From the Windows Start Menu, click Run.

Step 13

Enter the name of the folder that contains the vCenter Server database and click OK.

Step 14

Drag a copy of the parent folder (SSL) to the desktop as a backup.

Step 15

Return to the installer program.

Step 16

Click Next.

Step 17

In the vCenter Agent Upgrade window, click the Automatic radio button.

Step 18

Click Next.

Step 19

In the vCenter Server Service screen, check the Use SYSTEM Account check box.

Step 20

Click Next.

Step 21

Review the port settings and click Next.

Step 22

In the vCenter Server JVM Memory screen, click the appropriate memory radio button based on the number of hosts.

Step 23

Click Next.

Step 24

Click Install.

Step 25

Click Finish.

This step completes the upgrade of the vCenter Server.

Step 26

Upgrade the VMware vSphere Client to ESXi 5.1.

Step 27

Open the VMware vSphere Client.

Step 28

From the Help menu, choose About VMware vSphere.

Step 29

Confirm that the vSphere Client and the VMware vCenter Server are both version VMware 5.1.

If the ESX/ESXi host is running ESX/ESXi 4.1.0 or a later release and your DRS settings allow it, VUM automatically VMotions the VMs from the host to another host in the cluster and places the ESX/ESXi host in maintenance mode to upgrade the VEM. This process continues for the other hosts in the DRS cluster until all the hosts in the cluster are upgraded. For details about the required DRS settings and vMotion of VMs, see the VMware documentation about creating a DRS cluster.

The lines with bold characters in the preceding example show that all VEMs are upgraded to the current release.

The upgrade is complete.

Accepting the VEM Upgrade

Before You Begin

The network and server administrators must coordinate the upgrade procedure with each other.

You have received a notification in the vCenter Server that a VEM software upgrade is available.

Procedure

Step 1

In the vCenter Server, choose Inventory > Networking.

Step 2

Click the vSphere Client DVS Summary tab to check for the availability of a software upgrade.

Figure 7. vSphere Client DVS Summary Tab

Step 3

Click Apply upgrade.

The network administrator is notified that you are ready to apply the upgrade to the VEMs.

Manual Upgrade Procedures

Upgrading the VEM Software Using the vCLI

You can upgrade the VEM software by using the vCLI.

Before You Begin

If you are using vCLI, do the following:

You have downloaded and installed the VMware vCLI. For information about installing the vCLI, see the VMware vCLI documentation.

You are logged in to the remote host where the vCLI is installed.

Note

The vSphere command-line interface (vCLI) command set allows you to enter common system administration commands against ESX/ESXi systems from any machine with network access to those systems. You can also enter most vCLI commands against a vCenter Server system and target any ESX/ESXi system that the vCenter Server system manages. vCLI commands are especially useful for ESXi hosts because ESXi does not include a service console.

If you are using the esxupdate command, you are logged in to the ESX host.

If the ESX host is hosting the VSM, coordinate with the server administrator to migrate the VSM to a host that is not being upgraded. Proceed to Step 5.
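As a sketch, a vCLI-based VEM upgrade on an ESX/ESXi 4.x host might use the vihostupdate command from the remote machine where the vCLI is installed. The server address and bundle filename below are placeholders; on ESXi 5.x hosts, the esxcli software vib commands are used instead:

$ vihostupdate --server 203.0.113.20 --install --bundle VEM410-201208144101-BG-release.zip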

Step 5

Initiate the Cisco Nexus 1000V Bundle ID upgrade process.

Note

If VUM is enabled in the vCenter environment, disable it before entering the vmware vem upgrade proceed command to prevent the new VIBs from being pushed to all the hosts.

Enter the vmware vem upgrade proceed command so that the Cisco Nexus 1000V Bundle ID on the vCenter Server gets updated. If VUM is enabled and you do not update the Bundle ID, an incorrect VIB version is pushed to the VEM when you next add the ESX to the VSM.

switch# vmware vem upgrade proceed

Note

If VUM is not installed, the “The object or item referred to could not be found” error appears in the vCenter Server’s task bar. You can ignore this error message.
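The Bundle ID update is one stage of the VSM-driven VEM upgrade sequence. Assuming the standard Cisco Nexus 1000V upgrade commands, the sequence on the VSM looks like the following; the show command lets you track each stage:

switch# vmware vem upgrade notify
switch# show vmware vem upgrade status
switch# vmware vem upgrade proceed
switch# vmware vem upgrade complete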

Simplified Upgrade Process

Combined Upgrade

You can upgrade the VEM and ESX version simultaneously. A combined upgrade requires vSphere 5.0 Update 1 or later and is supported in Cisco Nexus 1000V Release 4.2(1)SV1(5.2) and later. This upgrade can be implemented manually or by using VUM.

Selective Upgrade

You can upgrade a selective set of VEMs, a few hosts or clusters at a time, in a single maintenance window. This enables incremental upgrades during short maintenance windows. Selective upgrade is supported with combined upgrades of VEM and ESX, and also with manual upgrades of VEMs only. It is supported for VUM-based combined upgrades with selected hosts or clusters using the GUI. It is not supported with VUM-based upgrades of VEMs alone. To upgrade manually using this procedure, follow these general steps:

Identify the cluster or set of hosts in a cluster

Place the selected hosts in maintenance mode (to vacate the VMs)

Upgrade the VEM image on the hosts using the manual command or scripts

Take the hosts out of maintenance mode, allowing Distributed Resource Scheduler (DRS) to rebalance VMs

Background Upgrade

You can upgrade VEMs without scheduling a single large maintenance window for all VEMs. You use the manual procedure to upgrade VEMs during production: place one host in maintenance mode, upgrade its VEM, and remove the host from maintenance mode. You do not have to shut off HA Admission Control and similar features (as you would during VUM upgrades), but you must ensure that there is spare capacity in the cluster and perform a health check before the upgrade. To upgrade using this procedure, follow these general steps:

Upgrade the VSM first as usual. This step can be done in a maintenance window.

Place one host at a time in maintenance mode (to vacate the VMs)

Upgrade the VEM image on that host using manual commands or scripts

Take the host out of maintenance mode, allowing DRS to rebalance the VMs.

Repeat the same procedure for every host in the DVS.

Note

Make sure there is enough spare capacity for HA and that all required ports have system profiles (such as mgmt vmk). Check the host health before upgrading.

Extended Upgrade

You can modify configurations between the upgrade maintenance windows. VSM configuration changes are allowed; for example, you can add or remove modules, change port configurations, and add or delete VLANs. If a set of hosts is upgraded to the latest VEM version using the Selective Upgrade or the Background Upgrade, the remaining hosts can stay at the older VEM version. During that time, various Cisco Nexus 1000V configuration changes are allowed between maintenance windows.

Note

Do not make configuration changes during a maintenance window when the VEMs are being upgraded.

The allowed configuration changes are as follows:

Add or remove modules

Add or remove ports (ETH and VETH)

Shut or no-shut a port

Migrate ports to or from a vswitch

Change port modes (trunk or access) on ports

Add or remove port profiles

Modify port profiles to add or remove specific features such as VLANs, ACLs, QoS, or PortSec.

Change port channel modes in uplink port profiles

Add or delete VLANs and VLAN ranges

Add or delete static MACs in VEMs

Note

QoS queuing configuration changes are not supported.

Upgrading from Releases 4.0(4)SV1(3, 3a, 3b, 3c, 3d) to the Current Release

Layer 3 Advantages

The following lists the advantages of using a Layer 3 configuration over a Layer 2 configuration:

The VSM can control VEMs that are in a different subnet.

The VEMs can be in different subnets.

Because the VEMs can be in different subnets, there is no constraint on the physical location of the hosts.

Minimal VLAN configuration is required to establish the VSM-VEM connection when compared to Layer 2 control mode. The IP address of the VEM (the Layer 3 capable vmknic's IP address) and the IP address of the VSM's control0/mgmt0 interface are the only required information.

In the VSM, either the mgmt0 or the control0 interface can be used as the Layer 3 control interface. If mgmt0 is used, no additional IP address is needed because the VSM's management IP address is used for the VSM-VEM Layer 3 connection.

If the management VMKernel (vmk0) is used as the Layer 3 control interface in the VEM, there is no need for another IP address because the host’s management IP address is used for VSM-VEM Layer 3 connectivity.

Note

These advantages are applicable only for ESX-Visor hosts. On ESX-Cos hosts, a new VMKernel must be created.

Layer 2 to 3 Conversion Tool

About VSM-VEM Layer 2 to 3 Conversion Tool

Use the VSM-VEM Layer 2 to 3 Conversion Tool as an optional, simplified method to migrate from Layer 2 to Layer 3 mode. The tool enables you to do the following:

Check whether the prerequisites are met for the migration from Layer 2 to Layer 3 mode.

Migrate the VSM from Layer 2 to Layer 3 mode, with user interaction.
In the process of migration, the tool creates a port profile. You can use port profiles to configure interfaces, which you can assign to other interfaces to give them the same configuration. The VSM-VEM Layer 2 to 3 Conversion Tool also gives you the option of retrieving the IP addresses from a local file (static).

Prerequisites for Using VSM-VEM Layer 2 to 3 Conversion Tool

The L2-L3_CT.zip file contains the applications that are required to run the VSM-VEM Layer 2 to 3 Conversion Tool.

The tool creates a port profile with the required configuration. You can select this port profile when prompted by the tool. The migration tool checks for connectivity between the VSM, vCenter, and VEM modules. Wait for the message that all connectivity is fine.

Step 10

Enter yes to continue when asked if you want to continue.

The migration tool proceeds to create an extract.csv file.

Step 11

Open the extract.csv file (in C:\Windows\Temp).

Step 12

Enter the vmknic IP details at the end of the text, delimited by semicolons, and save the file as convert.csv.

Step 13

Press any key to continue.

Step 14

Enter yes to confirm when asked if you are sure you completed the required steps.

Step 15

Enter the VSM password.

Step 16

Enter the vCenter password.

The migration tool connects to the vCenter and VSM of the user.

Step 17

Enter yes to confirm when asked if you want to continue.

The migration process continues.

Step 18

Enter the port profile name from the list of port profiles that appears at the prompt.

Once the port profile is selected, the max port value is automatically changed to 128.

Step 19

Enter yes to confirm when asked if you have updated the convert.csv file as per the instructions.

Step 20

Enter yes to confirm when asked if you want to continue.

The tool checks the connectivity between the VSM, vCenter, and VEM modules. A message is displayed that the addition of vmknics is successful and all connectivity is fine. The VmkNicAddingToHost window remains open until the configuration is complete.

Step 21

Enter yes to confirm that you would like to proceed with the mode change from L2 to L3.

Step 22

Enter yes to confirm when asked if you wish to continue.

Wait for the SUCCESSFULLY COMPLETED MIGRATION message to be displayed. The migration from Layer 2 to Layer 3 is now complete. The operating mode should now be listed as L3.

Using Extract Mode

You can use Extract Mode to extract the attached VEM states and save them to the Extract.csv file, which is located in C:\Windows\Temp.

Procedure

Step 1

Choose extract mode when prompted by the VSM-VEM Layer 2 to 3 Conversion Tool.

You can now view the data in the Extract.csv file in the Windows temp folder of your workstation.

Creating a Port Profile with Layer 3 Control Capability

Before You Begin

You are creating a port profile with Layer 3 control capability.

Allow the VLAN that you use for VSM to VEM connectivity in this port profile.

Configure the VLAN as a system VLAN.
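A Layer 3 control capable port profile that meets the requirements above might look like the following on the VSM. The profile name and VLAN ID are examples only:

switch# configure terminal
switch(config)# port-profile type vethernet l3control-pp
switch(config-port-prof)# capability l3control
switch(config-port-prof)# vmware port-group
switch(config-port-prof)# switchport mode access
switch(config-port-prof)# switchport access vlan 260
switch(config-port-prof)# system vlan 260
switch(config-port-prof)# no shutdown
switch(config-port-prof)# state enabled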

Note

VEM modules do not register with the VSM until a VMkernel interface (vmk) is migrated to a Layer 3 control capable port profile. You must migrate a vmk to the Layer 3 port profile after migrating the host vmnics to Ethernet port profiles. Migrate your management VMkernel interface into the Layer 3 capable port profile. Do not use multiple VMkernel interfaces on the same subnet.

The management VMkernel interface can also be used as the Layer 3 control interface, but only on ESX-Visor hosts.

After you enter the svs connection toVC command, the module is detached and reattached in Layer 3 mode. If this delay is more than six seconds, a module flap occurs. The flap does not affect data traffic.