The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command or packet capture setup.

VM-FEX Overview

VM-FEX combines virtual and physical networking into a single infrastructure. It allows you to provision, configure, and manage virtual machine network traffic and bare metal network traffic within a unified infrastructure.

The VM-FEX software extends the Cisco fabric extender technology to the virtual machine with these capabilities:

Each virtual machine is provided with a dedicated interface on the parent switch.

All virtual machine traffic is sent directly to the dedicated interface on the switch.

The standard vSwitch in the hypervisor is eliminated.

VM-FEX is one type of Distributed Virtual Switch (DVS or VDS). The DVS presents an abstraction of a single switch across multiple ESX servers that are part of the same Datacenter container in vCenter. The Virtual Machine (VM) Virtual Network Interface Controller (VNIC) configuration is maintained from a centralized location (the Nexus 5000 or the UCS in VM-FEX; this document illustrates the Nexus 5000-based VM-FEX).

VM-FEX can operate in two modes:

Pass-through: This is the default mode, in which the Virtual Ethernet Module (VEM) is involved in the data path for the VM traffic.

High-performance: VM traffic is not handled by the VEM but is passed directly to the Network IO Virtualization (NIV) adapter.

In order to use high-performance mode, it must be requested by the port-profile configuration and must be supported by the VM operating system and by its virtual adapter. More information about this is provided later in this document.

Definitions

Network IO Virtualization (NIV) uses VNTags in order to deploy several Virtual Network Links (VN-Link) across the same physical Ethernet channel.

Datacenter Bridging Capability Exchange (DCBX)

VNIC Interface Control (VIC)

Virtual NIC (VNIC), which indicates a host endpoint. It can be associated with either an active VIF or a standby VIF.

Distributed Virtual Port (DVPort), to which the VNIC is connected in the VEM.

NIV Virtual Interface (VIF), which indicates a network endpoint.

Virtual Ethernet (vEth) interface, which represents the VIF at the switch.

Pass-Through Switch (PTS), the VEM module installed in the hypervisor.

Note: The VEM used in VM-FEX is similar to the VEM used with the Nexus 1000v. The difference is that in VM-FEX, the VEM operates in pass-through mode and does not perform local switching between VMs on the same ESX.

Configure

The topology is a UCS-C server with a P81E VIC dual-homed to two Nexus 5548 VPC switches.

Network Diagram

The VPC is configured and initialized properly between the two Nexus 5000 switches.

VMware vCenter is installed and is accessed via a vSphere client.

ESXi is installed on the UCS-C server and added to vCenter.

Configuration steps are summarized here:

Enable NIV mode on the server adapter:

Connect to the Cisco Integrated Management Controller (CIMC) interface via HTTP and log in with the admin credentials.

Choose Inventory > Network Adapters > Modify Adapter Properties.

Enable NIV Mode, set the number of VM FEX interfaces, and save the changes.

Power off and then power on the server.

After the server comes back online, verify that NIV is enabled:

Create two static vEths on the server.

In order to create two VNICs, choose Inventory > Network Adapters > VNICs > Add. These are the most important fields to be defined:

VIC Uplink port to be used (P81E has two uplink ports referenced as 0 and 1).

Channel number: This is a unique channel ID of the VNIC on the adapter. This is referenced in the bind command under the vEth interface on the Nexus 5000. The scope of the channel number is limited to the VNTag physical link. The channel can be thought of as a "virtual link" on the physical link between the switch and the server adapter.

Port-profile: The list of port-profiles defined on the upstream Nexus 5000 can be selected. A vEth interface is automatically created on the Nexus 5000 if the Nexus 5000 is configured with the vEthernet auto-create command. Note that only the vEthernet port-profile names are passed to the server (port-profile configuration is not). This occurs after the VNTag link connectivity is established and the initial handshake and negotiation steps are performed between the switch and the server adapter.

Enable Uplink failover: The VNICs fail over to the other P81E uplink port if the configured uplink port goes offline.

Note: All of the switch configurations shown next should be configured on both of the Nexus 5500 VPC peers, except the Software Virtual Switch (SVS) connect command and the XML extension key, which should be done on the VPC primary switch only.

(Optional) Allow the Nexus 5000 to auto-create its vEthernet interfaces when the corresponding VNICs are defined on the server:

(config)# vethernet auto-create
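For context, this global command is only available once the virtualization feature set is enabled on the Nexus 5500. A minimal sketch of the global setup, assuming the feature set still needs to be installed (verify the exact commands for your NX-OS release), is:

(config)# install feature-set virtualization
(config)# feature-set virtualization
(config)# vethernet auto-create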

Enable VNTag on host interfaces.

Configure the N5k interface that connects to the servers in VNTag mode:

(config)# interface Eth 1/1
(config-if)# switchport mode vntag
(config-if)# no shutdown

Bring up static vEths.

On both Nexus 5500 switches, enable the static vEth virtual interfaces that should connect to the two static VNICs enabled on the server VIC.

On the Nexus 5548-A, enter:

interface vethernet 1
bind interface eth 1/1 channel 10
no shutdown

On the Nexus 5548-B, enter:

interface vethernet 2
bind interface eth 1/1 channel 11
no shutdown

Alternatively, these vEth interfaces can be automatically created with the vethernet auto-create command.

Note: In topologies where servers are dual-homed to Active/Active FEX modules, the server VNICs should have uplink failover enabled, and the switch vEthernet interfaces should have two bind interface commands (one per FEX Host Interface (HIF) port that the server is connected to). The vEthernet interface is either active or standby on each Nexus 5000 switch.
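For illustration only, a vEthernet that is dual-homed through two FEX HIF ports might be bound as in this sketch (the FEX interface numbers 101/1/1 and 102/1/1 and channel 10 are hypothetical placeholders):

interface vethernet 1
bind interface ethernet 101/1/1 channel 10
bind interface ethernet 102/1/1 channel 10
no shutdown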

Note: The dvs-name all command defines to which DVS switch in vCenter this port-profile should be exported as a port-group. Use the all keyword in order to export the port-group to all DVS switches in the Datacenter.
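As a reference sketch only, a vEthernet port-profile exported to vCenter might look like this; the profile name VM1 and VLAN 10 are placeholder assumptions, and the available attributes can vary by NX-OS release:

(config)# port-profile type vethernet VM1
(config-port-prof)# dvs-name all
(config-port-prof)# switchport mode access
(config-port-prof)# switchport access vlan 10
(config-port-prof)# no shutdown
(config-port-prof)# state enabled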

VM High-Performance Mode

In order to implement High-Performance mode (DirectPath I/O) and bypass the hypervisor for the VM traffic, configure the vEthernet port-profile with the high-performance host-netio command. In the case of VPC topologies, the port-profile should always be edited on both VPC peer switches. For example:

port-profile type vethernet VM2
high-performance host-netio

In order to have high-performance mode operational, your VM must also meet additional prerequisites, such as support in the guest operating system and in its virtual adapter (as noted in the VM-FEX Overview).

Here is how you verify High-Performance mode (DirectPath I/O) when it is used: under the VM Hardware settings, the DirectPath I/O field in the right menu shows as Active when VM High-Performance mode is in use and as Inactive when the default VM pass-through mode is in use.

Register the VPC primary Nexus 5548 in vCenter:
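The registration itself uses the XML extension key (installed as a plug-in in vCenter) and an SVS connection configured on the VPC primary switch. The following is only an illustrative sketch; the connection name, vCenter IP address, DVS name, and VRF are placeholder assumptions, and the exact keywords can vary by NX-OS release:

(config)# svs connection VCENTER
(config-svs-conn)# protocol vmware-vim
(config-svs-conn)# remote ip address 10.0.0.10 port 80 vrf management
(config-svs-conn)# dvs-name VMFEX-DVS
(config-svs-conn)# connect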

Note: In VPC topologies, the primary VPC switch pushes the extension key pair to vCenter as well as the port-profiles. The extension key is synchronized from the primary VPC peer to the secondary VPC peer. This is later verified with the show svs connection command, which reports the same extension key on both peers. If the two Nexus 5500 switches were not VPC peers, then the extension key configured would be different for each switch, and each switch would have to establish a separate SVS connection to the vCenter.

If you encounter unknown mode, make sure to enable uplink failover mode on the VNIC. Also make sure that the channel number that you specified in the CIMC matches the channel number that is specified in the vEthernet configuration.
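If further verification is needed, these show commands can confirm the vCenter connection state and the vEthernet bindings on each VPC peer (no sample output is shown here, since it varies by release):

show svs connection
show running-config interface vethernet 1
show interface vethernet 1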