As a reference model, Cisco VMDC 2.1 is both flexible and extensible, and may need to be extended or modified to meet the requirements of a specific enterprise data center network.

Functional Components

The Cisco VMDC 2.1 data center network design is based on a proven layered approach, which has been tested and improved over the past several years in some of the largest data center implementations in the world. The layered approach is the basic foundation of the data center design that seeks to improve scalability, performance, flexibility, resiliency, and maintenance.

The four layers covered in this implementation guide are:

•Aggregation Layer

•Services Layer

•Access Layer/Virtual Access Layer

•Compute Layer

This guide also includes some sample implementation details for the following additional layers:

•Core Layer

•Management Layer

•Storage Layer

In addition to the layered data center design, the Cisco VMDC 2.1 network implementation is done with the following key operational parameters in mind.

High Availability through:

•Device redundancy

•Link redundancy

•Path redundancy

Performance and Scalability through:

•N×10 Gigabit Ethernet Switching Infrastructure

•vPC (Virtual Port Channels) and MEC (Multi-Chassis EtherChannels)

•Fast convergence

Service Assurance through:

•QoS classification and marking

•Traffic flow matching

•Bandwidth guarantees

•Rate limits

Tenant Separation at each network layer:

•Core Layer: Virtual Routing and Forwarding (VRF)

•Aggregation Layer: VRF, VLAN

•Services Layer: VRF, VLAN, and Virtual Device Contexts

•Access Layer: VLAN

•Virtual Access Layer: VLAN

Physical Topology

The Cisco VMDC 2.1 network infrastructure deployment models the standard Cisco three-tier hierarchical architecture model (core, aggregation, access). This data center network design is based on a proven layered approach, which has been tested and improved over the past several years in some of the largest data center implementations in the world. The layered approach is the basic foundation of the data center design that seeks to improve scalability, performance, flexibility, resiliency, and maintenance. Figure 2-1 illustrates the overall Cisco VMDC 2.1 physical topology.

Figure 2-1 Cisco VMDC 2.1 Physical Topology

Logical Topology

Logical topologies are bound to network protocols and describe how data is moved across the network. The Cisco VMDC 2.1 basic tenant concept is modeled after a simple datacenter structure containing a public/common server farm and a secure/private server farm.

Figure 2-2 represents the Cisco VMDC 2.1 logical tenant topology that is created on the physical topology in Figure 2-1.

Figure 2-2 Cisco VMDC 2.1 Logical Topology

Tenant Model

The Cisco VMDC 2.1 tenant concept is modeled on a basic aggregation block containing both a common server farm and a secure server farm. Each tenant has an unprotected (public/common) zone and a protected (private/secure) zone.

Each Server Virtual Machine (VM) deployed within a tenant protected or unprotected zone is assumed to have three network interfaces (NICs):

•Front End - These interfaces are used for external access (HTTP, HTTPS, etc.) to the server or cluster, which can be accessed by application servers or users that are submitting jobs or retrieving job results from the cluster.

•Back End - These interfaces provide inter-compute node communications (clustering) and potentially a back-end high-speed storage path (NFS). Typical requirements include low latency and high bandwidth and may also include jumbo frame support.

•VM Management - These interfaces connect to the VM management VLAN (see Table 2-1) and provide administrative access to the virtual machine.

VLAN Allocation

In Cisco VMDC 2.1, the tenant VLAN scheme is flexible for different tenants or different tenant zones. The goal of the design was to allocate VLANs for different purposes spanning different devices or layers in the architecture. The Cisco VMDC 2.1 model tenant VLAN allocation was done as follows:

•3 front-end (public) VLANs

•1 or 2 back-end (private) VLANs

•1 VM management VLAN

Note Throughout this document, example configurations reference the VLANs in Table 2-1.

Table 2-1 Example Tenant 1 VLAN Scheme

Tenant     Zone         Device               Description     VLAN id
Tenant 1   Unprotected  Nexus 7000           Front End       211-213
                        Nexus 5000           Front End       211-213
                                             Back End        214-215
                                             VM Management   34
                        DSN - Catalyst 6500  FWSM Outside    212
                                             ACE             211
                        UCS 6100 FI          Front End       211-213
                                             Back End        214-215
                                             VM Management   34
                        Nexus 1000v          Front End       211-213
                                             Back End        214-215
                                             VM Management   34
Tenant 1   Protected    Nexus 7000           Front End       611-613
                        Nexus 5000           Front End       611-613
                                             Back End        614-615
                                             VM Management   34
                        DSN - Catalyst 6500  FWSM Inside     612
                                             ACE             611
                        UCS 6100 FI          Front End       611-613
                                             Back End        614-615
                                             VM Management   34
                        Nexus 1000v          Front End       611-613
                                             Back End        614-615
                                             VM Management   34

IP Addressing

In Cisco VMDC 2.1, the tenant IP addressing scheme is flexible and can support public or private addressing. The VRF segmentation also allows overlapping IP spaces for different tenants, given the assumption that complete path isolation is provided end to end. Table 2-2 is an example of tenant IP address modeling done in a private address block. If a tenant is assigned a contiguous block of subnets within the data center, the routing may be summarized when it is advertised to the rest of the network. In addition, only a small number of static routes are needed to provide reachability between a tenant's unprotected and protected zones.

Note Throughout this document, example configurations reference the addresses in Table 2-2.

Table 2-2 Example Tenant 1 Address Scheme

Tenant 1 - 10.1.0.0/18

  Unprotected Zone - 10.1.0.0/19

    VM Subnets
      10.1.1.0/24     Unprotected VM Subnet
      10.1.2.0/24     Unprotected VM Subnet
      10.1.3.0/24     Unprotected VM Subnet
      (remainder)     Reserved Unprotected VM Subnets

    ACE
      10.1.24.0/24    ACE Unprotected VIP Subnet
      (reserved)      Reserved VIP Subnet
      10.1.26.0/24    ACE SNAT Subnet
      (reserved)      Reserved SNAT Subnet

    Infrastructure
      10.1.28.0/24    Unprotected Infrastructure Subnet

    Management
      (reserved)      Reserved Management Subnet
      10.1.31.0/24    Unprotected Loopback Subnet

  Protected Zone - 10.1.32.0/19

    VM Subnets
      10.1.41.0/24    Protected VM Subnet
      10.1.42.0/24    Protected VM Subnet
      10.1.43.0/24    Protected VM Subnet
      (remainder)     Reserved Protected VM Subnets

    ACE
      10.1.56.0/24    ACE Protected VIP Subnet
      (reserved)      Reserved VIP Subnet
      10.1.58.0/24    ACE SNAT Subnet
      (reserved)      Reserved SNAT Subnet

    Infrastructure
      10.1.60.0/24    Protected Infrastructure Subnet

    Management
      (reserved)      Reserved Management Subnet
      10.1.63.0/24    Protected Loopback Subnet

Virtual Routing and Forwarding (VRF)

In Cisco VMDC 2.1, Layer 3 separation between tenants and between tenant zones is accomplished using Virtual Routing and Forwarding (VRF). VRF instances allow multiple routing configurations in a single Layer 3 switch using separate virtual routing tables. By default, communication between VRF instances is not allowed to protect the privacy of each tenant zone.

Each tenant is assigned two VRF instances, one that forms the unprotected (public) zone and another that forms the protected (private) zone. Routing information is carried across all the hops in each tenant's Layer 3 domain, and each tenant's unprotected and protected VRF is mapped to one or more VLANs in the Layer 2 domain.

Figure 2-5 depicts a completed logical topology showing the unprotected and protected VRFs extending into the services layer and the server VLANs as they extend to the access layer and then continue throughout the rest of the Layer 2 domain.

Figure 2-5 Cisco VMDC 2.1 Logical Topology

Routing

In Cisco VMDC 2.1, dynamic routing for each tenant is accomplished using OSPF as the interior gateway protocol. The remainder of the routing information is provided via static routes which are redistributed into OSPF at the Autonomous System Border Router (ASBR).

Not-so-stubby areas (NSSAs) are an extension of OSPF stub areas. Like stub areas, NSSAs prevent the flooding of external link-state advertisements (LSAs) into the area, relying instead on default routes to external destinations. NSSAs are more flexible than stub areas in that an NSSA can import external routes into the OSPF routing domain.

If the FWSM context is deployed in routed mode (recommended as the most flexible option), the unprotected OSPF area becomes a true NSSA with a connection to Area 0, while the protected OSPF area effectively behaves as a totally NSSA area because it has no connection to Area 0 and uses a default static route to exit to the unprotected zone. In Figure 2-6, two separate routing domains are connected via static routes on the FWSM.

Figure 2-6 Tenant Routing with FWSM in Routed Mode

If the FWSM context is deployed in transparent mode, the unprotected and protected interfaces form an OSPF adjacency. The OSPF NSSA is extended through the FWSM, which forms a single routing domain. In this case, all routing information will be populated in both tenant zones.

Figure 2-7 Tenant Routing with FWSM in Transparent Mode

Management Implementation

The Cisco VMDC 2.1 solution does not focus on a specific management network architecture. The Virtual Management Infrastructure (VMI) described in this section illustrates an example deployment of a management network and how it integrates into the overall VMDC architecture.

Virtual Management Infrastructure (VMI)

Virtual Management Infrastructure (VMI) is a network that hosts additional infrastructure that employs a variety of tools, applications, and additional devices to assist human network managers in monitoring and maintaining the overall Cisco VMDC 2.1 architecture.

The software applications may include, but are not limited to, the following list:

•Unified Computing System Manager (UCSM)

•Cisco Fabric Manager (FM)

•VMware vSphere

•BMC orchestration tools

Additional hardware, such as the Nexus 1010 and Network Analysis Modules (NAM), would be deployed in VMI.

Physical Topology

All VMDC infrastructure devices use the local management VRF and mgmt0 interface to provide an Out-of-Band (OoB) management connection to the VMI OoB management switch.

VMI uses a separate distribution layer (gateway routers) to provide routing functionality between VMI VLANs and connectivity to any internal or external networks.

VMI also employs an additional pair of Nexus 5000 switches to serve as a dedicated access layer for the compute resources contained within VMI. The VMI access switches are directly connected to the Cisco VMDC 2.1 infrastructure via the VMDC Nexus 5000 access switches.

The required management VLANs are extended from VMI through the access layer and into the virtual access and compute layers providing Layer 2 adjacent connectivity to all ESXi hosts as well as all server virtual machines residing on UCS.

Figure 2-8 Cisco VMI Physical Topology

vPC Implementation

A virtual port channel (vPC) allows links that are physically connected to two different Cisco Nexus 5000 Series devices to appear as a single port channel to a third device. The third device can be a switch, server, or any other networking device that supports port channels. A vPC provides Layer 2 multipathing, which allows you to create redundancy and increase bisectional bandwidth by enabling multiple parallel paths between nodes and by load balancing traffic.

Figure 2-10 shows the basic layered design on which the rest of the Cisco VMDC 2.1 architecture is built.

Figure 2-10 Basic Three-Tier Data Center Design

Core Layer

From an overall architecture perspective, provisioning a dedicated pair of data center core switches insulates the core from the remainder of the data center network to improve routing stability and provide a future scale point for the data center topology. If requirements dictate a scenario that needs two or more aggregation blocks, a dedicated data center core network provides ease of deployment for scale expansion with no additional equipment in the data center network.

The Cisco VMDC 2.1 solution does not focus on the core layer; however, some relevant configuration pieces are included in this guide as reference.

Aggregation Layer (Nexus 7000)

The aggregation layer provides a consolidation point where access layer switches are connected as well as delivers connectivity to the core and services layers of the data center. The aggregation layer provides the boundary between Layer 3 routed links and Layer 2 Ethernet broadcast domains as shown in Figure 2-11.

Figure 2-11 Aggregation Layer - Layer 3 and Layer 2 Boundaries

Nexus 7010 Module Details

The Cisco VMDC 2.1 solution was validated using the following Nexus 7010-compatible modules:

VDC Implementation

In the Nexus 7000 switches, the default VDC has unique abilities, including the ability to create up to three additional VDCs per switch (for a total of four VDCs including the default).

In Cisco VMDC 2.1, the default VDC is reserved for administrative functions, and a single non-default VDC is used for production network connections. The VDC configurations appear in Example 2-5 and Example 2-6.

This approach improves flexibility and security. You may grant administrative access to the non-default VDCs to perform configuration functions without exposing the ability to reload the switch or change software versions. There are no Layer 3 interfaces in the default VDC that are exposed to the production data network, and only the management interface is accessible through an out-of-band (OOB) management path. In this implementation, the default VDC is maintained as an administrative context that requires console access and/or separate security credentials.
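As an illustration, the following is a minimal NX-OS sketch of this split; the VDC name and interface range are hypothetical and are not taken from the validated configuration (Example 2-5 and Example 2-6).

! Hypothetical sketch: create a production VDC from the default VDC
! and allocate physical interfaces to it
vdc production
  allocate interface Ethernet3/1-32
!
! Administrators then move into the production VDC to configure it
switchto vdc production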

Virtual PortChannel Implementation (vPC)

A virtual port channel (vPC) allows links that are physically connected to two different Cisco Nexus 7000 Series devices to appear as a single port channel to a third device. The third device can be a switch, server, or any other networking device that supports port channels. A vPC provides Layer 2 multipathing, which allows you to create redundancy and increase bisectional bandwidth by enabling multiple parallel paths between nodes and by load balancing traffic.

The vPC domain includes vPC peer devices, the vPC peer keepalive link, the vPC peer link, and all the PortChannels in the vPC domain connected to the downstream device.

Cisco VMDC 2.1 also leverages two new vPC features added to NX-OS that improve scale and performance during convergence events. These features are peer switch and address resolution protocol (ARP) synchronization.

The vPC peer switch feature addresses the performance of spanning tree protocol (STP) convergence. It allows a pair of Cisco Nexus 7000 Series devices to appear as a single STP root in the Layer 2 topology. This feature eliminates the need to pin the STP root to the vPC primary switch and improves vPC convergence during vPC primary switch failures. To avoid loops, the vPC peer link is excluded from the STP computation. In vPC peer switch mode, STP BPDUs are sent from both vPC peer devices to avoid issues related to STP BPDU timeout on the downstream switches, which can cause traffic disruption.

The ARP synchronization feature addresses table synchronization across vPC peers using the reliable transport mechanism of the Cisco Fabric Services over Ethernet (CFSoE) protocol. You must enable IP ARP synchronization to support faster convergence of address tables between the vPC peers. This convergence is designed to overcome the delay involved in IPv4 ARP table restoration when the peer-link port channel flaps or when a vPC peer comes back online.

The current best practice is to use as much information as possible for input to the EtherChannel algorithm to achieve the best or most uniform utilization of EtherChannel members.
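The following fragment sketches these vPC and hashing settings on NX-OS; the domain ID, keepalive addresses, and hash selection are hypothetical examples, not the validated configuration.

! Hypothetical sketch: peer switch, ARP synchronization, and L3/L4 hashing
vpc domain 10
  peer-switch
  peer-keepalive destination 10.255.0.2 source 10.255.0.1 vrf management
  ip arp synchronize
!
! Feed Layer 3/4 fields into the port-channel hash for more uniform
! utilization of EtherChannel members
port-channel load-balance src-dst ip-l4port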

Virtual Routing and Forwarding (VRF)

In Cisco VMDC 2.1, the VRFs created in the aggregation layer provide Layer 3 separation between tenants and between tenant zones.

Figure 2-14 (VRF Design) depicts a logical representation of the zone separation for a single tenant.

Each tenant deploys with two VRFs, unprotected and protected. Routes propagate to all hops in a Layer 3 domain, so the Services Layer is depicted for clarity (this concept is clarified in section 3.4.3.4). The tenant zone VRFs are then mapped to the VLANs where the virtual machines reside. By default, communication between the VRF instances is prevented to protect the privacy of each tenant. The same default behavior applies to communication between tenant zones.

Figure 2-14 VRF Design

The following configurations are needed to provision a tenant Unprotected Zone VRF (a configuration sketch follows the list):

•Use OSPF point-to-point mode on the 10G Ethernet links so the adjacency is always formed with the neighbor. There is no DR/BDR election in a point-to-point mode. This configuration gives the flexibility to configure separate OSPF cost per point-to-point neighbor.

•OSPF hello and hold timers are left at default values.

•Use OSPF manual link costs on Layer 3 port channels to prevent cost changes when member links fail or are added to the bundle.

•OSPF throttle timers could be tuned further to achieve faster convergence.

•OSPF NSSA is used to allow importing of the static routes into the OSPF routing domain and also to limit the number of routes advertised from the aggregation layer to the services layer.

•If tenant subnetting allows, the OSPF area range command can be used to limit outbound prefix advertisements and potentially improve convergence.
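The sketch below illustrates these guidelines for a hypothetical tenant; the VRF name, OSPF process tag, area number, cost, and addressing are examples only, not the validated configuration.

! Hypothetical sketch: tenant unprotected-zone VRF with OSPF NSSA
vrf context tenant1-unprot
!
router ospf tenant1
  vrf tenant1-unprot
    area 0.0.0.211 nssa
!
! Layer 3 port channel toward the services layer: point-to-point network
! type and a manual cost, per the guidelines above
interface port-channel103
  vrf member tenant1-unprot
  ip address 10.1.28.1/30
  ip ospf network point-to-point
  ip ospf cost 4
  ip router ospf tenant1 area 0.0.0.211
  no shutdown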

Traffic Flow Optimization

The following traffic flow optimization deployment guidelines were identified:

Service Insertion with the Datacenter Services Node (DSN)

In Cisco VMDC 2.1, the firewall and load balancing services are deployed using the Cisco Datacenter Services Node (DSN). This approach decouples the service modules from dependence on a specific aggregation switch.

With VSS, the ACE and FWSM modules will be in active-active mode, with each virtual context in active-standby mode on the designated service modules of each Cisco DSN.

Figure 2-15 Service Layer within DSN

Catalyst 6500 Module Details

The Cisco VMDC 2.1 solution includes the following Catalyst 6500-compatible modules:

VSS Implementation

Virtual Switching System (VSS) combines two physical Cisco Catalyst 6500 Series Switches into one virtual switch.

Figure 2-16 Cisco Virtual Switching System

This configuration enables a unified control plane and allows both data planes to forward simultaneously. With VSS, the multi-chassis EtherChannel (MEC) capability is introduced, which allows a port channel to be defined across two physical switches.

Integrating VSS with Cisco DSN also increases the number of supported service modules per chassis from four to eight within a VSS domain, which enables an active-active highly available service chassis deployment.

Multi-Chassis EtherChannel (MEC)

For the Cisco VMDC 2.1 solution, the Nexus 7010 aggregation switches interconnect to the Cisco DSN through the MEC running in the VSS.

Figure 2-17 Multi-Chassis EtherChannel Connections to DSN

Example 2-17 Catalyst 6500 DSN Configuration

interface Port-channel103

description L3 PC to EAST-DIST-A

no switchport

no ip address

logging event link-status

logging event bundle-status

load-interval 30

port-channel port hash-distribution adaptive

!

interface Port-channel104

description L3 PC to EAST-DIST-B

no switchport

no ip address

logging event link-status

logging event bundle-status

load-interval 30

port-channel port hash-distribution adaptive

!

interface TenGigabitEthernet1/2/1

description To DIST-N7010-A-EAST-DIST-A Eth 3/4

no switchport

no ip address

logging event bundle-status

load-interval 30

channel-group 103 mode active

!

interface TenGigabitEthernet1/3/1

description To DIST-N7010-B-EAST-DIST-B Eth 4/4

no switchport

no ip address

logging event bundle-status

load-interval 30

channel-group 104 mode active

!

interface TenGigabitEthernet2/2/1

description To DIST-N7010-A-EAST-DIST-A Eth 4/4

no switchport

no ip address

logging event bundle-status

load-interval 30

channel-group 103 mode active

!

interface TenGigabitEthernet2/3/1

description To DIST-N7010-B-EAST-DIST-B Eth 3/4

no switchport

no ip address

logging event bundle-status

load-interval 30

channel-group 104 mode active

Example 2-18 Nexus 7010 Configuration

! EAST-DIST-A

interface port-channel103

description L3 link to EAST-VSS

no lacp graceful-convergence

!

interface Ethernet3/4

description To EAST-VSS-A Ten1/2/1

channel-group 103 mode active

no shutdown

!

interface Ethernet4/4

description To EAST-VSS-A Ten2/2/1

channel-group 103 mode active

no shutdown

!

! EAST-DIST-B

!

interface port-channel104

description L3 link to EAST-VSS

no lacp graceful-convergence

!

interface Ethernet3/4

description To EAST-VSS-A Ten2/3/1

channel-group 104 mode active

no shutdown

!

interface Ethernet4/4

description To EAST-VSS-A Ten1/3/1

channel-group 104 mode active

no shutdown

Virtual Routing and Forwarding (VRF)

In Cisco VMDC 2.1, the DSN uses a dual-homed routed approach for data path connectivity to redundant aggregation layer switches. The FWSM and ACE modules operate in routed mode. Each tenant is deployed with two VRFs: unprotected and protected. Routes propagate to all hops in a Layer 3 domain, and the VLANs used by the FWSM and ACE service modules are then mapped to the unprotected and protected VRFs. By default, communication between VRF instances, as well as between tenant zones, is prevented to protect the privacy of each tenant.

Application Control Engine (ACE)

In the VMDC 2.1 solution, the Cisco ACE modules provide the following features:

•Virtualization (context and resource allocation)

•Redundancy (active-active context failover)

•Load balancing (protocols, stickiness)

•Source NAT (static and dynamic NAT)

The initial step to deploy the Cisco ACE module in the Cisco VMDC 2.1 network is to allocate the VLANs that the module will use from the DSN. The svclc vlan-group command is used to allocate VLANs to VLAN groups; to apply the VLAN groups to the Cisco ACE module, use the svclc switch command.
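A minimal sketch of that allocation follows; the VLAN group number, VLAN list, and module slots are hypothetical.

! Hypothetical sketch: allocate tenant VLANs to a group and assign the
! group to the ACE module in each VSS chassis
svclc multiple-vlan-interfaces
svclc vlan-group 1 211,212,611,612
svclc switch 1 module 3 vlan-group 1
svclc switch 2 module 3 vlan-group 1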

Firewall Services Module (FWSM)

In the Cisco VMDC 2.1 solution, the Cisco FWSM provides the following features:

•Virtualization (context and resource allocation)

•Redundancy (active-active context failover)

•Security and inspection

•URL filtering

•Protocol inspection

The Cisco FWSM is deployed similarly to the ACE module by allocating the VLANs that the module uses from the DSN. The firewall vlan-group command assigns VLANs to VLAN groups. To apply the VLAN groups to the Cisco FWSM module, use the firewall switch command.
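A corresponding sketch for the FWSM follows, again with hypothetical group and slot numbers.

! Hypothetical sketch: define a firewall VLAN group and assign it to the
! FWSM in each VSS chassis
firewall multiple-vlan-interfaces
firewall vlan-group 2 212,612
firewall switch 1 module 4 vlan-group 2
firewall switch 2 module 4 vlan-group 2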

Services Deployment Guidelines

VSS

•It is important to size the VSS VSL accordingly. The total bandwidth of the VSL should be equal to the total amount of uplink traffic coming into a single chassis.

Port-Channel

•On the Catalyst 6500 DSN, configure the adaptive port-channel hash-distribution algorithm. This configuration optimizes the behavior of the port ASICs of member ports upon the failure of a single member.

•By default on the Nexus 7000, LACP graceful convergence is enabled. It should be disabled when connecting to the Catalyst 6500 DSN as the graceful failover defaults may delay the time taken for a disabled port to be brought down or cause traffic from the peer to be lost.

OSPF

•Use OSPF point-to-point mode on the 10G Ethernet links so the adjacency is always formed with the neighbor. There is no DR/BDR election in a point-to-point mode. This configuration gives the flexibility to configure separate OSPF cost per point-to-point neighbor.

•OSPF hello and hold timers are left at default values.

•Use OSPF manual link costs on Layer 3 port channels to prevent cost changes when member links fail or are added to the bundle.

•OSPF throttle timers could be tuned further to achieve faster convergence.

•OSPF NSSA is used to allow importing of the static routes into the OSPF routing domain and also to limit the number of routes advertised from the aggregation layer to the services layer.

FWSM

•Routed mode allows the most deployment flexibility.

•Prune unused VLANs so that only the VLANs required by each context are extended to the module.

ACE

•Prune unused VLANs so that only the VLANs required by each context are extended to the module.

Traffic Flow Optimization

•To optimize traffic flows within the DSN, keep all active ACE and FWSM contexts for a single tenant on the same DSN chassis. This limits the inter-module traffic required to traverse the VSL.

Access Deployment Guidelines

Layer 2 Configuration

•Set the MAC address aging time to be consistent with the aggregation layer Nexus 7000 (see the sketch below). The Nexus 7000 default aging time is 1800 seconds, and the Nexus 5000 default aging time is 300 seconds.
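For example, the Nexus 5000 aging time could be raised to match the Nexus 7000 default; a one-line sketch (on some NX-OS releases the command is entered as mac-address-table aging-time):

mac address-table aging-time 1800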

Compute Layer Hardware (UCS 6100 FI)

Figure 2-20 Compute Layer

End Host Mode

End host mode allows the fabric interconnect to act as an end host to the network, representing all servers (hosts) connected to it through vNICs. This connection is defined by pinning vNICs to uplink ports (dynamically or via hard pinning), which provides redundancy toward the network and presents the uplink ports as server ports to the rest of the fabric.

When in end-host mode, the fabric interconnect does not run STP and avoids loops by preventing uplink ports from forwarding traffic to each other and by denying egress server traffic on more than one uplink port at a time. End host mode is the default Ethernet switching mode, and it should be used if Layer 2 switching is used for upstream aggregation.

Figure 2-21 End Host Mode in UCSM

M81KR vNIC Allocation

The UCS M81KR adapter is unique in its ability to create multiple vNICs. In Cisco VMDC 2.1, 10 vNICs are presented to ESXi.

Virtual Access Layer (Nexus 1000v)

The VSM is used to configure, manage, monitor, and diagnose issues for the Cisco Nexus 1000v Series system (VSM and all controlled VEMs). In Cisco VMDC 2.1, the Nexus 1010 appliance hosts the VSM.

The Nexus 1000v VEM is a software component that runs inside each hypervisor (ESXi host). It enables advanced networking and security features, switches between directly attached virtual machines, and provides uplink capabilities to the rest of the network.

The chosen deployment scenario uses the two lights-out management (LOM) interfaces for management traffic, and the four interfaces on the PCI card carry control, packet, and data traffic. This option is ideal for deploying additional virtual service blades, such as a Network Analysis Module (NAM).

In this configuration, the two management interfaces connect to two separate upstream switches for redundancy. In addition, the four ports used for control, packet, and data traffic should be divided between two upstream switches for redundancy. Since control traffic is minimal, most of the bandwidth from the four Gigabit Ethernet interfaces is used for NAM traffic.

The VMI management network Nexus 5000 uses the FEX straight-through configuration as shown in Figure 2-24, which provides support for Host Port channels needed by the Nexus 1010.

VSM High Availability

Not all virtual service blades are active on the active Cisco Nexus 1010. As long as the active and standby Cisco Nexus 1010 appliances are connected, access through a serial connection is maintained to any virtual service. When one Cisco Nexus 1010 fails, the remaining Cisco Nexus 1010 becomes active, and all virtual services in the standby state on that Cisco Nexus 1010 become active on their own.

For more information about VSM high availability, see the Cisco Nexus 1000v High Availability and Redundancy Configuration Guide.

Nexus 1000v Uplink Implementation

As mentioned in M81KR vNIC Allocation, the M81KR adapter presents 10 vNICs to the ESXi host and ultimately to the Nexus 1000v. The port-channel uplinks are implemented so that each carries a specific type of traffic; the traffic types are broken down according to function in the uplink port profiles shown in Example 2-40.

The Nexus 5000 access layer switches are implemented using vPC which allows the Nexus 1000v to use all the available links. The uplinks on the Nexus 1000v are implemented as standard PortChannels. A standard PortChannel on the Cisco Nexus 1000V Series behaves like an EtherChannel on other Cisco switches and supports LACP. Standard PortChannels require that all uplinks in the PortChannel be in the same EtherChannel on the upstream switches.

Example 2-40 Nexus 1000v Uplink Port Channel Configuration

! PORT PROFILES

!

port-profile type ethernet App-BackEnd-uplink

vmware port-group

port-binding static

switchport mode trunk

switchport trunk allowed vlan 214-215,614-615

channel-group auto mode on mac-pinning

no shutdown

state enabled

!

port-profile type ethernet App-FrontEnd-uplink

vmware port-group

port-binding static

switchport mode trunk

switchport trunk allowed vlan 211-213,611-613

channel-group auto mode on mac-pinning

no shutdown

state enabled

!

port-profile type ethernet Ctrl-Pkt-NFS-uplink

vmware port-group

port-binding static

switchport mode trunk

switchport trunk allowed vlan 99,193-194

channel-group auto mode on mac-pinning

no shutdown

system vlan 193

state enabled

!

port-profile type ethernet Mgmt-uplink

vmware port-group

port-binding static

switchport mode trunk

switchport trunk allowed vlan 32-48,52,56,60

channel-group auto mode on mac-pinning

no shutdown

system vlan 33-34

state enabled

!

port-profile type ethernet Vmotion-uplink

vmware port-group

port-binding static

switchport mode trunk

switchport trunk allowed vlan 50

channel-group auto mode on mac-pinning

no shutdown

state enabled

!

! PORT-CHANNEL INTERFACES

!

interface port-channel1

inherit port-profile Ctrl-Pkt-NFS-uplink

!

interface port-channel2

inherit port-profile Vmotion-uplink

!

interface port-channel3

inherit port-profile Mgmt-uplink

!

interface port-channel4

inherit port-profile App-FrontEnd-uplink

!

interface port-channel5

inherit port-profile App-BackEnd-uplink

!

! ETHERNET INTERFACES

!

interface Ethernet3/1

inherit port-profile Ctrl-Pkt-NFS-uplink

no shutdown

!

interface Ethernet3/2

inherit port-profile Ctrl-Pkt-NFS-uplink

no shutdown

!

interface Ethernet3/3

inherit port-profile Vmotion-uplink

no shutdown

!

interface Ethernet3/4

inherit port-profile Vmotion-uplink

no shutdown

!

interface Ethernet3/5

inherit port-profile Mgmt-uplink

no shutdown

!

interface Ethernet3/6

inherit port-profile Mgmt-uplink

no shutdown

!

interface Ethernet3/7

inherit port-profile App-FrontEnd-uplink

no shutdown

!

interface Ethernet3/8

inherit port-profile App-FrontEnd-uplink

no shutdown

!

interface Ethernet3/9

inherit port-profile App-BackEnd-uplink

no shutdown

!

interface Ethernet3/10

inherit port-profile App-BackEnd-uplink

no shutdown

MAC Pinning

The default hashing algorithm used by the Cisco Nexus 1000V Series is source MAC address hashing (a source-based hash). Source-based hashing algorithms help ensure that a MAC address is transmitted down only a single link in the PortChannel, regardless of the number of links in a PortChannel.

With source-based hashing, a MAC address can move between interfaces under the following conditions:

Static pinning allows pinning of the virtual ports behind a VEM to a particular subgroup within the channel. Instead of allowing round robin dynamic assignment between the subgroups, you can assign (or pin) a static vEthernet interface, control VLAN, or packet VLAN to a specific port channel subgroup. With static pinning, traffic is forwarded only through the member ports in the specified subgroup.

Assign the Cisco Nexus 1000v "vEthernet" port-profile to the correct "sub-group-id #" by using the pinning id # command. vEthernet interfaces are presented as VMware ESX port groups to which virtual machine interfaces can be assigned or attached.

Example 2-43 Nexus 1000v vEthernet Port Profiles

port-profile type vethernet MGMT34

vmware port-group

switchport mode access

switchport access vlan 34

pinning id 6

service-policy type qos input mgmt

no shutdown

max-ports 1024

description Management Network - EAST Spirent Virtual Machines

state enabled

!

port-profile type vethernet Vmotion

vmware port-group

switchport mode access

switchport access vlan 50

pinning id 3

no shutdown

system vlan 50

max-ports 64

state enabled

Virtual Access Deployment Guidelines

Layer 2 Configuration

•Set the MAC address aging time to be consistent with the aggregation layer Nexus 7000 and the Nexus 5000. The Nexus 7000 default aging time is 1800 seconds, and the Nexus 1000v default aging time is 300 seconds.

•Redundant VSMs should be created on the Cisco Nexus 1010 pair with the Cisco Nexus 1000V Series software image.

Storage Layer

Cisco VMDC 2.1 supports storage area network (SAN) or network-attached storage (NAS) storage options depending on the overall data center requirements. The following sections describe how each storage type was implemented and show the VM datastores in use on each platform.

SAN

A SAN is a dedicated storage network that provides access to consolidated, block level storage. SANs primarily are used to make storage devices (such as disk arrays, tape libraries, and optical jukeboxes) accessible to servers so that the devices appear as locally attached to the operating system.

Cisco VMDC 2.1 utilizes a dual fabric, core-edge SAN design. The design provides redundancy at key failure points to ensure reliable end-to-end connectivity for both the hosts and storage array.

Figure 2-26 SAN dual fabric implementation

To ensure data separation, scalability, and future expansion, as well as high availability and redundancy at key points of failure, the following software features were enabled in Cisco VMDC 2.1:

•VSANs - General data separation

•IVR - Inter-VSAN Routing

•Zone/Zoneset - Granular data separation

•NPV/NPIV - Host end scalability

Some additional details are provided around the following implementations:

•UCSM WWNN/WWPN Pools

•UCS Boot From SAN

•Virtual Machine Datastore

VSAN

Virtual SANs (VSANs) improve storage area network (SAN) scalability, availability, and security by allowing multiple Fibre Channel SANs to share a common physical infrastructure of switches and ISLs. These benefits are derived from the separation of Fibre Channel services in each VSAN and the isolation of traffic between VSANs. Data traffic isolation between the VSANs also inherently prevents sharing of resources attached to a VSAN, such as robotic tape libraries. Unlike a typical fabric that is resized switch-by-switch, a VSAN can be resized port-by-port.

The following configurations show the Cisco VMDC 2.1 MDS 9513 VSAN configuration:

Example 2-44 MDS 9513 VSAN configuration

!9513A

vsan database

vsan 100 name "VMDC21"

vsan 500 name "EMC"

!

vsan database

vsan 100 interface fc5/1

vsan 100 interface fc5/7

vsan 500 interface fc5/13

vsan 500 interface fc5/14

vsan 100 interface fc6/1

vsan 100 interface fc6/7

vsan 500 interface fc6/13

vsan 500 interface fc6/14

!

!9513B

vsan database

vsan 101 name "VMDC21"

vsan 501 name "EMC"

vsan database

vsan 101 interface fc5/1

vsan 101 interface fc5/7

vsan 501 interface fc5/13

vsan 501 interface fc5/14

vsan 101 interface fc6/1

vsan 101 interface fc6/7

vsan 501 interface fc6/13

vsan 501 interface fc6/14

Inter VSAN Routing (IVR)

Cisco VMDC 2.1 uses IVR to route between the VSAN defined for the datacenter and the VSAN assigned to the EMC storage array. The following features were configured on the MDS 9513 SAN switches:

•IVR Distribution. The IVR feature uses the Cisco Fabric Services (CFS) infrastructure to enable efficient configuration management and to provide a single point of configuration for the entire fabric in the VSAN.

•IVR Network Address Translation (NAT). This IVR feature can be enabled to allow non-unique domain IDs; however, without NAT, IVR requires unique domain IDs for all switches in the fabric. IVR NAT simplifies the deployment of IVR in an existing fabric where non-unique domain IDs might be present.

The MDS 9513 configurations below show the following IVR feature configurations for a single blade server in the Cisco VMDC 2.1 construct:

•IVR Distribution

•IVR NAT

•IVR Auto Topology

•Active Zonesets

Example 2-45 MDS 9513 IVR Configuration

!9513A

!

feature ivr

ivr nat

ivr distribute

ivr vsan-topology auto

zone mode enhanced vsan 100

zone mode enhanced vsan 500

!

!Example for a single blade server

!

device-alias database

device-alias name EMC-7fA pwwn 50:00:09:72:08:1f:3d:58

device-alias name EMC-8fA pwwn 50:00:09:72:08:1f:3d:5c

device-alias name EMC-9fA pwwn 50:00:09:72:08:1f:3d:60

device-alias name EMC-10fA pwwn 50:00:09:72:08:1f:3d:64

device-alias name EAST-C1B1 pwwn 20:00:00:25:b5:00:01:14

!

fcdomain distribute

fcdomain fcid database

vsan 100 wwn 20:00:00:25:b5:00:01:14 fcid 0x010031 dynamic

! [EAST-C1B1]

vsan 500 wwn 50:00:09:72:08:1f:3d:58 fcid 0x610000 dynamic

! [EMC-7fA]

vsan 500 wwn 50:00:09:72:08:1f:3d:5c fcid 0x610001 dynamic

! [EMC-8fA]

vsan 500 wwn 50:00:09:72:08:1f:3d:60 fcid 0x610003 dynamic

! [EMC-9fA]

vsan 500 wwn 50:00:09:72:08:1f:3d:64 fcid 0x610002 dynamic

! [EMC-10fA]

!

!Active Zone Database Section for vsan 100

zone name IVRZ_EAST-C1B1_to_VMAX1999 vsan 100

member pwwn 20:00:00:25:b5:00:01:14

! [EAST-C1B1]

member pwwn 50:00:09:72:08:1f:3d:58

! [EMC-7fA]

member pwwn 50:00:09:72:08:1f:3d:5c

! [EMC-8fA]

member pwwn 50:00:09:72:08:1f:3d:60

! [EMC-9fA]

member pwwn 50:00:09:72:08:1f:3d:64

! [EMC-10fA]

!

zoneset name nozoneset vsan 100

member IVRZ_EAST-C1B1_to_VMAX1999

!

ivr zone name EAST-C1B1_to_VMAX1999

member pwwn 20:00:00:25:b5:00:01:14 vsan 100

! [EAST-C1B1]

member pwwn 50:00:09:72:08:1f:3d:64 vsan 500

! [EMC-10fA]

member pwwn 50:00:09:72:08:1f:3d:58 vsan 500

! [EMC-7fA]

member pwwn 50:00:09:72:08:1f:3d:5c vsan 500

! [EMC-8fA]

member pwwn 50:00:09:72:08:1f:3d:60 vsan 500

! [EMC-9fA]

!

ivr zoneset name dcpod_fab_a_ivr

member EAST-C1B1_to_VMAX1999

!

! show command output

SAN-M9513-A# sho ivr

Inter-VSAN Routing is enabled

Inter-VSAN enabled switches

---------------------------

AFID VSAN DOMAIN CAPABILITY SWITCH WWN

-------------------------------------------------------------------

1 1 0xa4(164) 0000001f 20:00:00:0d:ec:3b:b6:40 *

1 100 0x 1( 1) 0000001f 20:00:00:0d:ec:3b:b6:40 *

1 200 0x 2( 2) 0000001f 20:00:00:0d:ec:3b:b6:40 *

1 500 0x61( 97) 0000001f 20:00:00:0d:ec:3b:b6:40 *

Total: 4 IVR-enabled VSAN-Domain pairs

Inter-VSAN topology status

--------------------------

Current Status: Inter-VSAN topology is ACTIVE, AUTO Mode

Last activation time: Fri Dec 10 21:40:35 2010

Inter-VSAN zoneset status

-------------------------

name : dcpod_fab_a_ivr

state : activation success

last activate time : Fri Feb 11 19:49:04 2011

Fabric distribution status

-----------------------

fabric distribution enabled

Last Action Time Stamp : Fri Feb 11 19:48:48 2011

Last Action : Commit

Last Action Result : Success

Last Action Failure Reason : none

Inter-VSAN NAT mode status

--------------------------

FCID-NAT is enabled

Last activation time : Mon Dec 6 16:43:40 2010

AAM mode status

--------------------------

AAM is disabled

License status

-----------------

IVR is running based on the following license(s)

ENTERPRISE_PKG

Sharing of tcam space across xE ports disabled

SAN-M9513-A# show ivr zoneset active

zone name EAST-C1B1_to_VMAX1999

* pwwn 20:00:00:25:b5:00:01:14 vsan 100 autonomous-fabric-id 1

[EAST-C1B1]

pwwn 50:00:09:72:08:1f:3d:64 vsan 500 autonomous-fabric-id 1

[EMC-10fA]

* pwwn 50:00:09:72:08:1f:3d:58 vsan 500 autonomous-fabric-id 1

[EMC-7fA]

pwwn 50:00:09:72:08:1f:3d:5c vsan 500 autonomous-fabric-id 1

[EMC-8fA]

* pwwn 50:00:09:72:08:1f:3d:60 vsan 500 autonomous-fabric-id 1

[EMC-9fA]

!

!9513B

!

feature ivr

ivr nat

ivr distribute

ivr vsan-topology auto

zone mode enhanced vsan 101

zone mode enhanced vsan 501

!

!Example for a single blade server

!

device-alias database

device-alias name EMC-7fB pwwn 50:00:09:72:08:1f:3d:59

device-alias name EMC-8fB pwwn 50:00:09:72:08:1f:3d:5d

device-alias name EMC-9fB pwwn 50:00:09:72:08:1f:3d:61

device-alias name EMC-10fB pwwn 50:00:09:72:08:1f:3d:65

device-alias name EAST-C1B1 pwwn 20:00:00:25:b5:00:02:14

!

fcdomain distribute

fcdomain fcid database

vsan 501 wwn 50:00:09:72:08:1f:3d:59 fcid 0x1c0000 dynamic

! [EMC-7fB]

vsan 501 wwn 50:00:09:72:08:1f:3d:61 fcid 0x1c0001 dynamic

! [EMC-9fB]

vsan 501 wwn 50:00:09:72:08:1f:3d:5d fcid 0x1c0002 dynamic

! [EMC-8fB]

vsan 501 wwn 50:00:09:72:08:1f:3d:65 fcid 0x1c0003 dynamic

! [EMC-10fB]

vsan 101 wwn 20:00:00:25:b5:00:02:14 fcid 0x1f0030 dynamic

! [EAST-C1B1]

!

!Active Zone Database Section for vsan 101

zone name IVRZ_EAST-C1B1_to_VMAX1999 vsan 101

member pwwn 20:00:00:25:b5:00:02:14

! [EAST-C1B1]

member pwwn 50:00:09:72:08:1f:3d:59

! [EMC-7fB]

member pwwn 50:00:09:72:08:1f:3d:5d

! [EMC-8fB]

member pwwn 50:00:09:72:08:1f:3d:61

! [EMC-9fB]

member pwwn 50:00:09:72:08:1f:3d:65

! [EMC-10fB]

!

zoneset name nozoneset vsan 101

member IVRZ_EAST-C1B1_to_VMAX1999

!

ivr zone name EAST-C1B1_to_VMAX1999

member pwwn 50:00:09:72:08:1f:3d:65 vsan 501

! [EMC-10fB]

member pwwn 50:00:09:72:08:1f:3d:59 vsan 501

! [EMC-7fB]

member pwwn 50:00:09:72:08:1f:3d:5d vsan 501

! [EMC-8fB]

member pwwn 50:00:09:72:08:1f:3d:61 vsan 501

! [EMC-9fB]

member pwwn 20:00:00:25:b5:00:02:14 vsan 101

! [EAST-C1B1]

!

ivr zoneset name dcpod_fab_b_ivr

member EAST-C1B1_to_VMAX1999

!

! show command output

!

SAN-M9513-B# sho ivr

Inter-VSAN Routing is enabled

Inter-VSAN enabled switches

---------------------------

AFID VSAN DOMAIN CAPABILITY SWITCH WWN

-------------------------------------------------------------------

1 1 0x d( 13) 0000001f 20:00:00:0d:ec:2d:0e:40 *

1 101 0x1f( 31) 0000001f 20:00:00:0d:ec:2d:0e:40 *

1 201 0x21( 33) 0000001f 20:00:00:0d:ec:2d:0e:40 *

1 501 0x1c( 28) 0000001f 20:00:00:0d:ec:2d:0e:40 *

Total: 4 IVR-enabled VSAN-Domain pairs

Inter-VSAN topology status

--------------------------

Current Status: Inter-VSAN topology is ACTIVE, AUTO Mode

Last activation time: Tue Dec 14 17:02:00 2010

Inter-VSAN zoneset status

-------------------------

name : dcpod_fab_b_ivr

state : activation success

last activate time : Fri Feb 11 20:01:12 2011

Fabric distribution status

-----------------------

fabric distribution enabled

Last Action Time Stamp : Fri Feb 11 20:00:55 2011

Last Action : Commit

Last Action Result : Success

Last Action Failure Reason : none

Inter-VSAN NAT mode status

--------------------------

FCID-NAT is enabled

Last activation time : Tue Dec 14 17:02:00 2010

AAM mode status

--------------------------

AAM is disabled

License status

-----------------

IVR is running based on the following license(s)

ENTERPRISE_PKG

Sharing of tcam space across xE ports disabled

SAN-M9513-B# sho ivr zoneset active

zone name EAST-C1B1_to_VMAX1999

pwwn 50:00:09:72:08:1f:3d:65 vsan 501 autonomous-fabric-id 1

[EMC-10fB]

* pwwn 50:00:09:72:08:1f:3d:59 vsan 501 autonomous-fabric-id 1

[EMC-7fB]

pwwn 50:00:09:72:08:1f:3d:5d vsan 501 autonomous-fabric-id 1

[EMC-8fB]

* pwwn 50:00:09:72:08:1f:3d:61 vsan 501 autonomous-fabric-id 1

[EMC-9fB]

* pwwn 20:00:00:25:b5:00:02:14 vsan 101 autonomous-fabric-id 1

The configuration can also be seen through either the Fabric Manager or Datacenter Network Manager applications.

Figure 2-28 Fabric Manager (FM) IVR Fabric A implementation

Figure 2-29 Fabric Manager (FM) IVR Fabric B implementation

NPIV/NPV

NPIV allows a Fibre Channel host connection or N-Port to be assigned multiple N-Port IDs or Fibre Channel IDs (FCIDs) over a single link. All FCIDs assigned are managed on a Fibre Channel fabric as unique entities on the same physical host. Different applications can be used in conjunction with NPIV. In a virtual machine environment where many host operating systems or applications are running on a physical host, each virtual machine can now be managed independently from zoning, aliasing, and security perspectives. In the Cisco VMDC 2.1 environment, which uses the Cisco MDS 9513, each host connection logs in to a single virtual SAN (VSAN).

Example 2-46 MDS 9513 NPIV Configuration

!9513A

SAN-M9513-A# sho run | inc npiv

feature npiv

!

SAN-M9513-A# show npiv status

NPIV is enabled

!

!9513B

SAN-M9513-B# sho run | inc npiv

feature npiv

!

SAN-M9513-B# show npiv status

NPIV is enabled

An extension to NPIV, the N-Port Virtualizer (NPV) feature allows the UCS 6140 Fabric Interconnect to behave as an NPIV-based host bus adapter (HBA) toward the core Fibre Channel director, the MDS 9513. The device aggregates the locally connected host ports or N-Ports into one or more uplinks (pseudo-interswitch links) to the core switches. The only requirement of the core director is that it supports the NPIV feature.

Example 2-47 UCS 6140 NPV Configuration

!6140A

EAST-U6140-A(nxos)# show run | in npv|npiv

feature npv

npv enable

feature npiv

!

EAST-U6140-A(nxos)# show npv status

npiv is enabled

disruptive load balancing is disabled

External Interfaces:

====================

Interface: fc2/1, VSAN: 100, FCID: 0x010000, State: Up

Interface: fc2/3, VSAN: 100, FCID: 0x010001, State: Up

Interface: fc3/1, VSAN: 100, FCID: 0x010002, State: Up

Interface: fc3/3, VSAN: 100, FCID: 0x010003, State: Up

Number of External Interfaces: 4

Server Interfaces:

==================

Interface: vfc1287, VSAN: 100, State: Up

Interface: vfc1297, VSAN: 100, State: Up

Interface: vfc1307, VSAN: 100, State: Up

Interface: vfc1317, VSAN: 100, State: Up

Interface: vfc1327, VSAN: 100, State: Up

Interface: vfc1337, VSAN: 100, State: Up

Interface: vfc1347, VSAN: 100, State: Up

Interface: vfc1367, VSAN: 100, State: Up

Interface: vfc1387, VSAN: 100, State: Up

Interface: vfc1407, VSAN: 100, State: Up

Interface: vfc1417, VSAN: 100, State: Up

Interface: vfc1427, VSAN: 100, State: Up

Interface: vfc1437, VSAN: 100, State: Up

Interface: vfc1447, VSAN: 100, State: Up

Interface: vfc1457, VSAN: 100, State: Up

Interface: vfc1467, VSAN: 100, State: Up

Interface: vfc1477, VSAN: 100, State: Up

Interface: vfc1487, VSAN: 100, State: Up

Interface: vfc1497, VSAN: 100, State: Up

Interface: vfc1507, VSAN: 100, State: Up

Interface: vfc1537, VSAN: 100, State: Up

Interface: vfc1547, VSAN: 100, State: Up

Interface: vfc1612, VSAN: 100, State: Up

Interface: vfc1636, VSAN: 100, State: Up

Interface: vfc1696, VSAN: 100, State: Up

Interface: vfc2467, VSAN: 100, State: Up

Number of Server Interfaces: 26

!6140B

EAST-U6140-B(nxos)# show run | in npv|npiv

feature npv

npv enable

feature npiv

EAST-U6140-B(nxos)# show npv status

npiv is enabled

disruptive load balancing is disabled

External Interfaces:

====================

Interface: fc2/1, VSAN: 101, FCID: 0x1f0000, State: Up

Interface: fc2/2, VSAN: 101, FCID: 0x1f0003, State: Up

Interface: fc2/3, VSAN: 101, FCID: 0x1f0001, State: Up

Interface: fc2/4, VSAN: 101, FCID: 0x1f0002, State: Up

Number of External Interfaces: 4

Server Interfaces:

==================

Interface: vfc1288, VSAN: 101, State: Up

Interface: vfc1298, VSAN: 101, State: Up

Interface: vfc1308, VSAN: 101, State: Up

Interface: vfc1318, VSAN: 101, State: Up

Interface: vfc1328, VSAN: 101, State: Up

Interface: vfc1338, VSAN: 101, State: Up

Interface: vfc1348, VSAN: 101, State: Up

Interface: vfc1368, VSAN: 101, State: Up

Interface: vfc1388, VSAN: 101, State: Up

Interface: vfc1408, VSAN: 101, State: Up

Interface: vfc1418, VSAN: 101, State: Up

Interface: vfc1428, VSAN: 101, State: Up

Interface: vfc1438, VSAN: 101, State: Up

Interface: vfc1448, VSAN: 101, State: Up

Interface: vfc1458, VSAN: 101, State: Up

Interface: vfc1468, VSAN: 101, State: Up

Interface: vfc1478, VSAN: 101, State: Up

Interface: vfc1488, VSAN: 101, State: Up

Interface: vfc1498, VSAN: 101, State: Up

Interface: vfc1508, VSAN: 101, State: Up

Interface: vfc1538, VSAN: 101, State: Up

Interface: vfc1548, VSAN: 101, State: Up

Interface: vfc1613, VSAN: 101, State: Up

Interface: vfc1637, VSAN: 101, State: Up

Interface: vfc1697, VSAN: 101, State: Up

Interface: vfc2468, VSAN: 101, State: Up

Number of Server Interfaces: 26

UCSM WWNN/WWPN Pools

UCSM allows server administrators either to assign the WWNN/WWPN manually for each B200 blade server or to create a pool that is used to dynamically assign pre-defined WWNN/WWPN addresses. The Cisco VMDC 2.1 SAN implementation utilizes dynamic WWNN/WWPN pools as illustrated in Figure 2-30.

Figure 2-30 UCSM WWNN/WWPN Pool Definition

UCS Boot from SAN

In Cisco VMDC 2.1, each of the UCS B200 blade servers is configured to boot VMware ESXi from a small boot LUN (5 GB) as the primary option.

Figure 2-31 UCS Boot Order Configuration in UCSM

In VMware vCenter, the ESXi host boot partition can be seen once the host is online.

Figure 2-32 VMware VCenter ESXi Disk Configuration

If the configuration is expanded further, the dual fabric paths can be seen in VMware vCenter.

Figure 2-33 VMware VCenter ESXi showing SAN Fabric A and B Paths

Virtual Machine Datastore Configuration

The SAN provides only block-based storage and leaves file system concerns to the client (host) side. In Cisco VMDC 2.1, the tenant vSphere virtual machine files (.vmx, .vmdk, snapshots, etc.) stored on the SAN (block device) require a VMFS-formatted datastore. The VMFS formatting is done through VMware once the disk is accessible. The following figures illustrate details of the datastore configured for Tenant 1.

Figure 2-34 VMware VCenter ESXi showing Virtual Machine Datastore

Figure 2-35 VMware VCenter Tenant 1 VM DataStore Details

The following EMC storage array features complement the datastore configuration:

•Logical Device separation (EMC LUN Masking and Mapping) - LUN masking, in conjunction with SAN zoning, extends the security from the SAN to the internal storage array by creating a logical connection from the host pWWN to the LUN device through the FA ports.

•Storage Thin Provisioning (EMC Virtual Provisioning) - Thin provisioning the LUNs at the storage level enables efficient use of available space on the storage array and hot expansion of the storage array by simply adding data devices to the thin pool.

NAS

NAS, in contrast to SAN, provides both storage and a file system. NAS uses file-based protocols such as NFS or SMB/CIFS where it is clear that the storage is remote, and computers request a portion of an abstract file rather than a disk block.

To ensure data separation, scalability, and future expansion, as well as high availability and redundancy at key points of failure, the following software features were enabled in Cisco VMDC 2.1:

•10GE Path Redundancy (Cisco vPC and NetApp LACP Trunking)

•Virtual Filer Separation (NetApp vFiler) per Tenant

Some additional details are provided around the following implementations:

•Virtual Machine Datastore

VLAN and Virtual Adapter Configuration

In Cisco VMDC 2.1, a back-end VLAN can be used to access the NAS device. Several implementation scenarios are possible in the Cisco VMDC 2.1 topology:

•Common vFiler - This datastore can be used as a common device where all tenant VMs may be housed and booted from. Only VMware would have access to this datastore, and the VLANs would not be exposed to any tenant devices.

•Per Tenant vFiler - These datastores are allocated on a per-tenant basis. Each can be accessed either through a separate virtual interface on VMware used to boot VMs, or presented as a vFiler that can be mapped directly by tenant VMs.

The NAS VLAN allocation was done as follows:

Note Throughout this document, example configurations reference the NAS VLANs used in Table 2-5.

Table 2-5 Backend NAS VLAN Allocation

Zone   Device                             Description                           VLAN id   IP addressing
VMDC   VMDC Nexus 5000 /                  Common vFiler                         99        10.0.99.0/24
       NetApp FAS6080                     Tenant 1 vFiler (1500 Byte Ethernet)  214       192.168.1.0/24
                                          Tenant 1 vFiler (9000 Byte Ethernet)  215       192.168.100.0/24

The following figures show the VMware virtual adapters assigned to each of these VLANs.

vPC to NetApp

A virtual port channel (vPC) allows links that are physically connected to two different Cisco Nexus 5000 Series devices to appear as a single port channel to a third device. The third device in the Cisco VMDC 2.1 storage context is the NetApp FAS6080. A vPC provides Layer 2 multipathing, which allows you to create redundancy and increase bisectional bandwidth by enabling multiple parallel paths between nodes and by load balancing traffic.
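A minimal Nexus 5000 sketch of such a vPC leg toward the NetApp controller follows; the port-channel/vPC number, member interface, and VLAN list are hypothetical, and the same configuration would be mirrored on the vPC peer.

! Hypothetical sketch: vPC member toward the NetApp FAS6080 (one peer shown)
interface port-channel99
  switchport mode trunk
  switchport trunk allowed vlan 99,214-215
  vpc 99
!
interface Ethernet1/17
  switchport mode trunk
  switchport trunk allowed vlan 99,214-215
  channel-group 99 mode active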

Additional Technology Implementation

Jumbo MTU Implementation

A jumbo frame is basically anything bigger than 1522 bytes, with a common size of 9000 bytes, which is exactly six times the standard Ethernet MTU of 1500 bytes. With Ethernet headers, a 9000-byte jumbo frame is 9014-9022 bytes on the wire. This makes it large enough to encapsulate a standard NFS (network file system) data block of 8192 bytes, yet not large enough to exceed the 12,000-byte limit of Ethernet's error-checking CRC (cyclic redundancy check) algorithm.

Large frames are commonly employed in large data transfers; in contrast, for interactive data flows, such as terminal connections, small packets are normally used. In Cisco VMDC 2.1, jumbo MTU targets applications such as:

•Server back-to-back communication (e.g., NFS transactions)

•Server clustering

•High-speed data backups

Figure 2-43 shows the physical links in the Cisco VMDC 2.1 solution that were configured to carry jumbo frames.

RedHat 5.5 Guest Operating System

To set jumbo frames on an interface in RHEL 5.5, enter the following command with root privileges for each interface whose MTU you want to change:

ifconfig <interface#> mtu <number>

Example 2-57 RedHat Guest OS

ifconfig eth2 mtu 9000

ifconfig eth2

eth2 Link encap:Ethernet HWaddr 00:50:56:85:00:08

inet addr:192.168.100.22 Bcast:192.168.100.255 Mask:255.255.255.0

inet6 addr: fe80::250:56ff:fe85:8/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1

RX packets:0 errors:0 dropped:0 overruns:0 frame:0

TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:1000

RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
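Note that an ifconfig change does not survive a reboot. On RHEL 5.x, the MTU is typically made persistent in the interface configuration file, as in this sketch (assuming eth2):

# /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
ONBOOT=yes
MTU=9000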

Jumbo Frame Deployment Guidelines

The following deployment guidelines were identified:

Nexus 7010

When configuring MTU on the Nexus 7000 series, follow these guidelines (a configuration sketch follows the list):

•Configure the system jumbo MTU size, which can be used to specify the MTU size for Layer 2 interfaces. Specify an even number between 1500 and 9216. If not configured, the system jumbo MTU size defaults to 9216 bytes.

•For Layer 3 interfaces, configure an MTU size that is between 576 and 9216 bytes.

•For Layer 2 interfaces, configure all Layer 2 interfaces to use either the default MTU size (1500 bytes) or the system jumbo MTU size (default size of 9216 bytes).
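A minimal sketch of these settings follows; the interface is hypothetical, and the sizes match the defaults described above.

! Hypothetical Nexus 7000 sketch: system jumbo MTU and per-interface MTU
system jumbomtu 9216
!
interface Ethernet1/1
  mtu 9216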

Nexus 5000

When configuring MTU on the Nexus 5000 series, follow these guidelines (a configuration sketch follows the list):

•The system jumbomtu command defines the maximum MTU size for the switch. However, jumbo MTU is only supported for system classes that have MTU configured.

•MTU is specified per system class. You cannot configure MTU on the interfaces.

•The system class MTU sets the MTU for all packets in the class. The system class MTU cannot be configured larger than the global jumbo MTU.

•The FCoE system class (for Fibre Channel and FCoE traffic) has a default MTU of 2240 bytes. This value cannot be modified.
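Because MTU is set per system class on the Nexus 5000, a jumbo configuration looks like the following sketch (policy name hypothetical; on later NX-OS releases this is expressed through network-qos policies):

! Hypothetical Nexus 5000 sketch: jumbo MTU on the default system class
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
!
system qos
  service-policy type network-qos jumbo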

Multicast Implementation

In Cisco VMDC 2.1, multicast is implemented in two different ways. In the unprotected zone, Layer 3 multicast (PIM) on the tenant VLANs provides multicast capability intra- and inter-VLAN and external to the rest of the network. In the protected zone, Layer 2 multicast (IGMP) on the VLANs supports only intra-VLAN multicast requirements.

The multicast deployment in Cisco VMDC 2.1 is structured around the following features and configurations at specific locations in the topology:

Core

•PIM (sparse mode)

•Anycast RP using MSDP

Unprotected Zone - Intra- and Inter-VLAN

•PIM (sparse mode) for Front End VLANs

•IGMP Querier deployed at the Aggregation and/or Access Layer for Back End VLANs

•Static RP

•IGMP Snooping

Protected Zone - Intra-VLAN only

•IGMP Querier deployed at the Aggregation or Access Layer for Front End VLANs and/or at the Access Layer for Back End VLANs

•IGMP Snooping

Figure 2-46 Multicast Deployment in Layers

As mentioned in the Core Layer section, the core design is not a focus of Cisco VMDC 2.1. This section is included for completeness to show an example deployment of a redundant multicast rendezvous point (RP) in the core.

Anycast RP is an implementation strategy that provides load sharing and redundancy in Protocol Independent Multicast sparse mode (PIM-SM) networks. Anycast RP allows two or more rendezvous points (RPs) to share the load for source registration and the ability to act as a backup for each other. Multicast Source Discovery Protocol (MSDP) makes Anycast RP possible.

Figure 2-47 PIM Flows and RPs

Example 2-58 and Example 2-59 present an example core configuration for MSDP Anycast RP and PIM SM that could be used in a typical Cisco VMDC 2.1 deployment.
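Those examples are specific to the validated topology; as a rough illustration of the technique, two core routers sharing an anycast RP address over MSDP might be configured as follows (all addresses, and the RP group scope, are assumptions):

! Core router A (core router B mirrors this with its own loopback0 address)
feature pim
feature msdp
!
interface loopback0
  description MSDP peering source (unique per router)
  ip address 10.254.253.1/32
  ip pim sparse-mode
!
interface loopback1
  description Anycast RP address (identical on both RPs)
  ip address 10.254.254.254/32
  ip pim sparse-mode
!
ip pim rp-address 10.254.254.254 group-list 224.0.0.0/4
ip msdp peer 10.254.253.2 connect-source loopback0
ip msdp originator-id loopback0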

When a network/VLAN does not have a router that can take on the multicast router role and provide mrouter discovery for the switches, you can turn on the IGMP querier feature. This feature allows a Layer 2 switch to proxy for a multicast router and send out periodic IGMP queries in that network. The remaining switches in the network then define their respective mrouter ports as the interfaces on which they received these IGMP queries.
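A minimal sketch of enabling the querier on a switch for a Back End VLAN follows; the VLAN number and querier source address are assumptions:

! Enable the IGMP snooping querier for VLAN 215
vlan 215
  ip igmp snooping querier 192.168.215.254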

Access Layer (Nexus 5000)

The Nexus 5000 has IGMP snooping enabled by default. With IGMP snooping, the switch snoops (or listens) for IGMP messages on all ports and builds an IGMP snooping table that maps each multicast group to the switch ports that have requested it.

The IGMP Querier can alternatively be deployed in the Access Layer for Back End VLANs in either the unprotected or protected tenant zones. Back End VLANs do not extend to the aggregation layer, so an mrouter must be defined at the access layer.

UCS 6100 Fabric Interconnect

The UCS 6100 has IGMP snooping enabled by default. With IGMP snooping, the switch snoops (or listens) for IGMP messages on all ports and builds an IGMP snooping table that maps each multicast group to the switch ports that have requested it.

Nexus 1000v

The Nexus 1000v has IGMP snooping enabled by default. With IGMP snooping, the switch snoops (or listens) for IGMP messages on all ports and builds an IGMP snooping table that maps each multicast group to the switch ports that have requested it.

Multicast Deployment Guidelines

The following deployment guidelines were identified:

vPC

•IGMP snooping-The vPC peers should be configured identically.

QoS Implementation

This section describes the main categories of the Cisco QoS toolset used in Cisco VMDC 2.1. The following topics are covered at the relevant layers of the VMDC network:

•Classification

•Marking

•Queueing

Aggregation (Nexus 7010)

The following QoS topics are covered at the Cisco VMDC 2.1 aggregation layer:

•Classification and Marking

•Queuing-Defines MQC objects that you can use for queuing and scheduling, as well as a limited set of marking objects.

Classification and Marking

Classification tools identify a packet or flow so that it can be marked with a specific priority. In Cisco VMDC 2.1, classification sets the packet priority for the datacenter by examining the following:

•Layer 2 Parameters (CoS)

•Layer 3 Parameters (DSCP or source and destination IP address)

•Layer 4 Parameters (TCP or UDP ports)

The marking policy then sets the packet priority based on the DSCP to CoS mappings defined in the policy. The original DSCP value is left unchanged and the CoS is remarked to the desired traffic class.

Figure 2-49 shows the interfaces where the classification and marking policy is applied in the inbound direction. The policy is placed on all of the core-facing interfaces as well as all of the services-facing interfaces. The classification and marking policy must be created and applied on a per-tenant basis.

Figure 2-49 Nexus 7000 Classification and Marking Policy

The following configuration shows how the example classification and marking policy for Tenant 1 is applied; a sketch of the policy definition itself follows the interface configuration:

! Apply the qos service policy on all port-channel sub-interfaces facing Core

!

interface port-channel101.1

description T1U PC Subif to CORE-A

mtu 9216

encapsulation dot1q 3101

service-policy type qos input ingress-marking

vrf member T1U

no ip redirects

ip address 10.1.28.5/30

ip ospf cost 5

ip ospf network point-to-point

ip router ospf 1 area 0.0.0.0

ip pim sparse-mode

no shutdown

!

interface port-channel102.1

description T1U PC Subif to CORE-B

mtu 9216

encapsulation dot1q 3201

service-policy type qos input ingress-marking

vrf member T1U

no ip redirects

ip address 10.1.28.13/30

ip ospf cost 5

ip ospf network point-to-point

ip router ospf 1 area 0.0.0.0

ip pim sparse-mode

no shutdown
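The ingress-marking policy referenced on the sub-interfaces above is defined once and reused per tenant. A sketch of what such a DSCP-to-CoS marking policy might contain is shown below; the class names and mappings are illustrative, not the validated VMDC values:

class-map type qos match-any platinum-traffic
  match dscp 46
class-map type qos match-any gold-traffic
  match dscp 34
!
policy-map type qos ingress-marking
  ! Remark CoS from DSCP; the original DSCP value is left unchanged
  class platinum-traffic
    set cos 5
  class gold-traffic
    set cos 4
  class class-default
    set cos 0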

Queuing

Traffic queuing is the ordering of packets and applies to both input and output of data. Device modules can support multiple queues, which are used to control the sequencing of packets in different traffic classes.

In Cisco VMDC 2.1, two queueing considerations exist for the Nexus 7010 at the aggregation layer: the interface queueing policy, which is designed as a five-queue model, and the fabric queueing, which is always a four-queue model.

Table 2-6 shows the Nexus 7000 hardware queuing capabilities for the M1 linecard and the current capabilities for the fabric.

Cisco Nexus 7000 I/O modules use virtual output queuing (VOQ) to ensure fair access to fabric bandwidth for multiple ingress ports transmitting to one egress port. Four classes of service are available in the switch fabric. The CoS-to-queue mapping and the DWRR weights within the fabric cannot be modified. Table 2-8 shows the CoS-to-queue mapping within the fabric.

Table 2-8 Nexus 7000 Fabric CoS-to-Queue Mappings

Queue #                  CoS
Q0 (Strict Priority)     5-7
Q1                       3-4
Q2                       2
Q3                       0-1

Note Strict priority traffic takes precedence over best-effort traffic across the fabric. Non-strict priority queues are serviced equally as they have the same DWRR weight.

Datacenter Services Node (Catalyst 6500)

The following topics are covered at the VMDC services layer:

•Queueing

Queueing

Traffic queuing is the ordering of packets and applies to both input and output of data. Device modules can support multiple queues, which are used to control the sequencing of packets in different traffic classes.

In Cisco VMDC 2.1, the DSN queueing policy is designed as a four-queue model. Although the WS-X6708 card supports more queues, the implementation was simplified, and drop thresholds were implemented on the queues that were used.
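A sketch of interface-level queuing on a WS-X6708 port consistent with a four-queue model (one strict-priority queue plus three WRR queues) follows; the bandwidth weights and CoS placements are assumptions, not the validated VMDC values:

interface TenGigabitEthernet1/1
 ! Three WRR queues share the non-priority bandwidth
 wrr-queue bandwidth 20 30 50
 ! Map CoS values to WRR queues (queue, threshold, CoS values)
 wrr-queue cos-map 1 1 0 1
 wrr-queue cos-map 2 1 2
 wrr-queue cos-map 3 1 3 4
 ! CoS 5 is serviced by the strict-priority queue
 priority-queue cos-map 1 5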

Access (Nexus 5000)

The Cisco Nexus 5000 Series switch supports three policy types. The following QoS parameters can be specified in policy maps for each type of class:

•network-qos-A network-qos policy is used to instantiate system classes and associate parameters with those classes that are of system-wide scope.

•queuing-A type queuing policy is used to define the scheduling characteristics of the queues associated with system classes.

•qos-A type qos policy is used to classify traffic that is based on various Layer 2, Layer 3, and Layer 4 fields in the frame and to map it to system classes.

Classification

A type qos policy map is used to classify traffic based on various Layer 2, Layer 3, and Layer 4 fields in the frame and to map it to system classes. Traffic can be matched on the following criteria:

•Class of Service-Matches traffic based on the CoS field in the frame header.

•Access Control Lists-Classifies traffic based on the criteria in existing ACLs.

To classify packets based on incoming CoS, use the following system-level configuration:

Example 2-69 Nexus 5000 Classification of incoming CoS

class-map type qos class-fcoe

class-map type qos match-any class-gold

match cos 4

class-map type qos match-all class-bronze

match cos 1

class-map type qos match-all class-silver

match cos 2

class-map type qos match-all class-platinum

match cos 5

!

policy-map type qos system-level-qos

class class-platinum

set qos-group 5

class class-gold

set qos-group 4

class class-silver

set qos-group 3

class class-bronze

set qos-group 2

!

class-map type network-qos class-gold

match qos-group 4

class-map type network-qos class-bronze

match qos-group 2

class-map type network-qos class-silver

match qos-group 3

class-map type network-qos class-platinum

match qos-group 5

class-map type network-qos class-all-flood

match qos-group 2

class-map type network-qos class-ip-multicast

match qos-group 2

!

system qos

service-policy type qos input system-level-qos

To classify packets coming from an externally attached device using ACLs, use the following interface-level configuration:

Example 2-70 Nexus 5000 external classification based on ACL

ip access-list class-bronze-acl

10 permit ip 10.7.1.212/32 any

ip access-list class-gold-acl

10 permit ip 10.7.1.214/32 any

ip access-list class-platinum-acl

10 permit ip 10.7.1.215/32 any

ip access-list class-silver-acl

10 permit ip 10.7.1.213/32 any

!

class-map type qos match-all class-gold-external

match access-group name class-gold-acl

class-map type qos match-all class-bronze-external

match access-group name class-bronze-acl

class-map type qos match-all class-silver-external

match access-group name class-silver-acl

class-map type qos match-all class-platinum-external

match access-group name class-platinum-acl

!

policy-map type qos external-input-policy

class class-platinum-external

set qos-group 5

class class-gold-external

set qos-group 4

class class-silver-external

set qos-group 3

class class-bronze-external

set qos-group 2

!

interface port-channel4

description vpc netapp6080-1-7a

switchport mode trunk

vpc 4

switchport trunk allowed vlan 14,99,214-215

spanning-tree port type edge trunk

service-policy type qos input external-input-policy

!

interface port-channel5

description vpc netapp6080-2-7a

switchport mode trunk

vpc 5

switchport trunk allowed vlan 14,99,214-215

spanning-tree port type edge trunk

service-policy type qos input external-input-policy

Marking

A network-qos policy is used to instantiate system classes and associate parameters with those classes that are of system-wide scope. The actions that are performed on the matching traffic are as follows:

•MTU-The MTU that needs to be enforced for the traffic that is mapped to a system class. Each system class has a default MTU and the system class MTU is configurable.

•Queue Limit-This configuration specifies the number of buffers that need to be reserved for the queues of this system class. This option is not configurable for no-drop system classes.

•Set CoS value-This configuration is used to mark 802.1p values for all traffic mapped to this system class.

To set queue limits, MTU, and CoS values, use the following system configuration:

Example 2-71 EAST-N5020-A

system jumbomtu 9216

!

class-map type network-qos class-gold

match qos-group 4

class-map type network-qos class-bronze

match qos-group 2

class-map type network-qos class-silver

match qos-group 3

class-map type network-qos class-platinum

match qos-group 5

!

policy-map type network-qos system-level-qos

class type network-qos class-platinum

queue-limit 30000 bytes

mtu 9216

set cos 5

class type network-qos class-gold

queue-limit 30000 bytes

mtu 9216

set cos 4

class type network-qos class-silver

queue-limit 30000 bytes

mtu 9216

set cos 2

class type network-qos class-bronze

queue-limit 30000 bytes

mtu 9216

set cos 1

class type network-qos class-fcoe

pause no-drop

mtu 2158

class type network-qos class-default

mtu 9216

!

system qos

service-policy type network-qos system-level-qos

Queueing

A type queuing policy is used to define the scheduling characteristics of the queues associated with system classes. The actions that are performed on the matching traffic are as follows:

•Priority-Sets a system class for strict-priority scheduling. Only one system class can be configured for priority in a given queuing policy.

In Cisco VMDC 2.1, the Nexus 5000 queueing is designed as a five-class model. CoS 3 is removed from use and reserved for future deployment of Fibre Channel over Ethernet.

Table 2-11 Nexus 5000 CoS-to-Queue Mappings

QoS Group       CoS
5 (Platinum)    5
4 (Gold)        4
3 (Silver)      2
2 (Bronze)      1
Default         0

Example 2-72 Nexus 5000 Queuing

class-map type queuing class-gold

match qos-group 4

class-map type queuing class-bronze

match qos-group 2

class-map type queuing class-silver

match qos-group 3

class-map type queuing class-platinum

match qos-group 5

class-map type queuing class-all-flood

match qos-group 2

class-map type queuing class-ip-multicast

match qos-group 2

!

policy-map type queuing egress_queueing_policy

class type queuing class-platinum

priority

class type queuing class-gold

bandwidth percent 20

class type queuing class-silver

bandwidth percent 20

class type queuing class-bronze

bandwidth percent 20

class type queuing class-fcoe

bandwidth percent 0

class type queuing class-default

bandwidth percent 40

!

system qos

service-policy type queuing output egress_queueing_policy

Compute Layer Hardware (UCS 6100 FI)

System Classes

Cisco UCS uses Data Center Ethernet (DCE) to handle all traffic inside a Cisco UCS system. This industry-standard enhancement to Ethernet divides the bandwidth of the Ethernet pipe into eight virtual lanes. System classes determine how the DCE bandwidth in these virtual lanes is allocated across the entire Cisco UCS system.

Each system class reserves a specific segment of the bandwidth for a specific type of traffic. This provides a level of traffic management, even in an oversubscribed system.

Figure 2-51 System Class Definitions in Cisco UCSM

Virtual Access (Nexus 1000v)

The following topics are covered at the VMDC virtual access layer:

•Classification

•Marking

Classification and Marking

As a best practice, identify and mark traffic (with CoS and/or DSCP values) as close to the source as possible. On the Nexus 1000v, this marking is performed using the ingress port-profile that is applied to the VM interfaces.

The configuration below shows an example for marking traffic on a Frontend, Backend, and management VM interface.

Example 2-73 Nexus 1000v

! FRONT END APPLICATION TRAFFIC

!

ip access-list http

10 permit tcp any any eq www

class-map type qos match-all http_cos4

match access-group name http

!

policy-map type qos PUBLIC

class http_cos4

set cos 4

set dscp 34

class class-default

!

port-profile type vethernet T01U211

vmware port-group

switchport mode access

switchport access vlan 211

service-policy type qos input PUBLIC

no shutdown

description UnProtected Access Vlan211 Tenant#1

state enabled

!

! BACK END APPLICATION TRAFFIC

!

ip access-list nfs

10 permit tcp any any eq 2049

!

class-map type qos match-all nfs_cos5

match access-group name nfs

!

policy-map type qos PRIVATE

class nfs_cos5

set cos 5

set dscp 46

class class-default

!

port-profile type vethernet T01NJ215

capability l3control

vmware port-group

switchport mode access

switchport access vlan 215

pinning id 10

service-policy type qos input PRIVATE

no shutdown

description Tenant_1_NAS_jumbo_frame

state enabled

!

! MANAGEMENT TRAFFIC

!

ip access-list mgmt_COS1

10 permit ip 10.0.34.0/24 any

ip access-list mgmt_COS2

30 permit ip 10.0.33.0/24 any

!

class-map type qos match-all mgmt_COS1

match access-group name mgmt_COS1

class-map type qos match-all mgmt_COS2

match access-group name mgmt_COS2

!

policy-map type qos mgmt

class mgmt_COS1

set cos 1

set dscp 10

class mgmt_COS2

set cos 2

set dscp 16

class class-default

!

port-profile type vethernet MGMT33

capability l3control

vmware port-group

switchport mode access

switchport access vlan 33

service-policy type qos input mgmt

no shutdown

system vlan 33

description Management Network - UCS (KVM/ESXi) Devices

state enabled

!

port-profile type vethernet MGMT34

vmware port-group

switchport mode access

switchport access vlan 34

pinning id 6

service-policy type qos input mgmt

no shutdown

max-ports 1024

description Management Network - EAST Spirent Virtual Machines

state enabled

UCS M81KR (Palo)

The following topic is covered at the UCS Hardware layer:

•Queueing

UCS QoS Policy

In the case where the Nexus 1000v is doing the CoS/DSCP marking, a special mode of operation termed "Trusted-CoS" mode is supported on the M81KR, which sets the adapter to an essentially pass-through mode.

In this mode, the queuing behavior on the M81KR adapter changes. The number of queues is reduced to three: one for control, one for Fibre Channel, and one for Ethernet, into which traffic from all Ethernet vNICs is directed.

This mode is enabled by setting Host Control to Full in the QoS policy that is applied to a vNIC.

QoS Deployment Guidelines

The following deployment guidelines were identified:

Nexus 5000

•Optimized multicasting allows use of the unused multicast queues to achieve better throughput for multicast frames. If optimized multicast is enabled for the default drop system class, the system uses all six queues to service the multicast traffic, with all six queues given equal priority (see the configuration sketch after this list).
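A sketch of enabling optimized multicast on the default drop class, assuming the system-level network-qos policy shown earlier, follows:

policy-map type network-qos system-level-qos
  class type network-qos class-default
    ! Allow otherwise unused queues to service multicast traffic
    multicast-optimize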

Nexus 1010 Deployment

The chosen deployment scenario uses the two lights-out management (LOM) interfaces for management traffic, and the four interfaces on the PCI card carry control, packet, and data traffic. This option is ideal for deploying additional virtual service blades like a Network Analysis Module (NAM).

The VLAN allocation and the Nexus 1010 configurations for the following VLANs are relevant for the NAM deployment:

Table 2-12 VLAN Allocations for Nexus 1010 VMI Configurations

Zone: VMI
Devices: VMI Nexus 5000, VMDC Nexus 5000

Description                          VLAN id   Comment
Infrastructure Device Management     32        Nexus 1010 management interface VLAN (LOM ports)
UCS (KVM/ESXi) Device Management     33        Nexus 1010 control, packet, and data traffic (Port Channel)
EAST-N1KV-CTRL/PKT (VSM to VEM)      193       Nexus 1000v control and packet for VSM to VEM traffic

The virtual service blades must be installed on VLAN 33 to ensure that the traffic to and from the NAM uses the vPC on the Nexus 5000 and Nexus 1010 instead of the management (LOM) ports. Figure 2-53 shows a single Nexus 1010 in the management layer with the VLANs on the correct interfaces for reference.

Figure 2-53 Nexus 1010 in the Management Layer

Virtual Service Blade (VSB) Installation

The NAM must first be installed and configured on the Nexus 1010 Virtual Service Appliance as illustrated in Example 2-74.
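As a rough sketch, the VSB creation on the Nexus 1010 CLI follows the pattern below; the VSB name, ISO filename, and VLAN assignment are assumptions, and the installer then prompts interactively for the NAM IP addressing:

virtual-service-blade NAM
  ! Point the blade at the NAM ISO image copied to bootflash
  virtual-service-blade-type new nam-4-2-1.iso
  ! Place NAM management and data traffic on VLAN 33 (see Table 2-12)
  interface data vlan 33
  ! Instantiate and boot the blade
  enable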

After the NAM is installed and configured, the NAM IP address can be entered in a browser to access the NAM Traffic Analyzer GUI and set up a managed device.

Figure 2-54 NAM Traffic Analyzer Web Interface

NetFlow

NetFlow Data Export (NDE) records offer an aggregate view of the network. When NDE is enabled on the local or remote switch, the NetFlow data source becomes available on the NAM without the need to create any SPAN sessions. The Cisco NAM can get detailed information on the packets through the NDE records without having to examine each packet, so more traffic can be analyzed. However, NetFlow only provides statistics for applications, hosts, and conversations; detailed monitoring for voice, VLAN, IAP, and DiffServ, as well as packet captures and decodes, is not available with NetFlow.

To use a Nexus 1000v device as an NDE data source for the NAM, configure the switch to export NDE packets to UDP port 3000 on the NAM.

Example 2-76 Nexus 1000v Configuration

!enable netflow

feature netflow

!

!create exporter and monitor

flow exporter test

description EAST N1Kv Exporter

destination 10.0.33.14 use-vrf management

transport udp 3000

source mgmt0

dscp 16

version 9

flow monitor test

description management flow monitor

record netflow-original

exporter test

timeout active 1800

cache size 4096

!

! apply netflow to a port profile

port-profile type vethernet MGMT33

capability l3control

vmware port-group

switchport mode access

switchport access vlan 33

service-policy type qos input mgmt

ip flow monitor test input

ip flow monitor test output

no shutdown

system vlan 33

description Management Network - UCS (KVM/ESXi) Devices

state enabled

NetFlow data sources are automatically learned when you create a device in the Devices section.

Figure 2-55 NetFlow Data Source in NAM Traffic Analyzer

The NetFlow Data Records should start to populate in the Monitoring tab.

Figure 2-56 NetFlow in the Monitoring Tab

Either Basic or Custom historical reports can then be created.

Figure 2-57 NetFlow Reports in the NAM Traffic Analyzer

ERSPAN

To send the data directly to the NAM management IP address (data VLAN), configure the ERSPAN source session on the Nexus 1000v. No ERSPAN destination session configuration is required on the NAM. After performing this configuration on the switch, the ERSPAN data source should appear on the NAM GUI and can then be selected to analyze the ERSPAN traffic.

An example ERSPAN monitor session configuration on the Nexus 1000v is illustrated in Example 2-77.

Example 2-77 Nexus 1000v ERSPAN Configuration

! enable l3control on the port-profile

!

EAST-N1000V(config-port-prof)# show run port-profile T01U211

port-profile type vethernet T01U211

capability l3control

vmware port-group

switchport mode access

switchport access vlan 211

no shutdown

description UnProtected Access Vlan211 Tenant#1

state enabled

!

! create monitor session

EAST-N1000V(config)# sho run | begin erspan-source

monitor session 1 type erspan-source

source port-profile T01U211 both

destination ip 10.0.33.14

erspan-id 1

ip ttl 64

ip prec 0

ip dscp 0

mtu 1500

header-type 2

no shut

!

EAST-N1000V(config)# show monitor session 1

session 1

---------------

type : erspan-source

state : up

source intf :

rx :

tx :

both :

source VLANs :

rx :

tx :

both :

source port-profile :

rx : T01U211

tx : T01U211

both : T01U211

filter VLANs : filter not specified

destination IP : 10.0.33.14

ERSPAN ID : 1

ERSPAN TTL : 64

ERSPAN IP Prec. : 0

ERSPAN DSCP : 0

ERSPAN MTU : 1500

ERSPAN Header Type: 2

Once the monitor session is set up, the ERSPAN session appears in the NAM GUI and overview statistics become available.

Figure 2-58 ERSPAN in the NAM Traffic Analyzer

From there, selecting ERSPAN as a data source allows operations such as packet capture, real-time reporting, and viewing host or protocol details.

Figure 2-59 ERSPAN Data Source Selection in the NAM Traffic Analyzer

Either Basic or Custom historical reports can then be created.

Figure 2-60 Example ERSPAN Report in NAM Traffic Analyzer

NAM Deployment Guidelines

The following NAM deployment guidelines were identified:

•The NAM data VLAN is used for both management and data (packet) collection for the virtual NAM. Unlike the Nexus 1000v VSM, the virtual NAM does not inherit the management VLAN from the VSB. The IP address assigned to the NAM must be in the data VLAN.