VXLAN Load Balancing

Virtual extensible LAN
(VXLAN) load balancing allows you to ensure that data moves efficiently between
Cisco AVS and the leaf switch (a Nexus 9000) over multiple uplinks when you
have a MAC pinning policy and VXLAN encapsulation.

The specific
combination of VXLAN encapsulation and MAC pinning does not provide a built-in
load-balancing mechanism when there are multiple uplinks between Cisco AVS and
the leaf switch. This limitation does not apply if VLAN encapsulation or LACP
is used. When the host (server) is added to a distributed virtual switch (DVS)
in VMware vCenter, the Cisco APIC creates a VMware kernel NIC (vmknic) that is
used for VXLAN encapsulation to send data packets.

This vmknic uses only
one of the many available uplinks to send packets if MAC pinning is the link
aggregation method. We therefore recommend that you enable VXLAN load
balancing if you use VXLAN encapsulation with MAC pinning as your link
aggregation method.

Note

If you have a Cisco Fabric Extender (FEX), you cannot enable VXLAN load balancing if the FEX is connected directly to a Nexus 9000 switch. See the "Cisco Fabric Extender" topology section in the Cisco Application Virtual Switch Installation Guide for more information about restrictions on using a FEX connected directly to a leaf switch.

Note

VXLAN load balancing is enabled by default. However, to use it effectively, you need to configure additional vmknics to match the number of PNICs.

Enabling VXLAN Load
Balancing

VXLAN load balancing
is automatically enabled as soon as more than one vmknic is connected to the
Cisco AVS. Each vmknic can use only one uplink port, so we recommend that you
have an equal number of vmknics and uplinks. A maximum of eight vmknics can be
attached to a Cisco AVS switch.

Each of the vmknics
that you create has its own software-based MAC address. In VXLAN load
balancing, the vmknics provide a unique MAC address to packets of data that can
then be directed to use certain physical NICs (PNICs).

For example, with
VXLAN load balancing, virtual machine (VM) B sends a packet, and vmknic2 adds
a header with its own MAC address and sends the packet out through PNIC3;
VM C sends a packet, and vmknic4 adds a header with its own MAC address and
sends the packet out through PNIC2; and so on.

You need to have as
many vmknics as the host has PNICs, up to a maximum of eight. For example, if
the host has five PNICs, you need to add four vmknics to enable VXLAN
load-balancing; the Cisco APIC already created one vmknic when the host was
added to the distributed virtual switch (DVS).
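The vmknic arithmetic described above can be sketched as a small helper (an illustrative sketch only; the function name is ours, while the cap of eight vmknics and the one vmknic already created by the Cisco APIC come from this section):

```python
def vmknics_to_add(pnic_count: int, existing_vmknics: int = 1) -> int:
    """Return how many vmknics to create so the vmknic count matches
    the PNIC count, up to the Cisco AVS maximum of eight.

    existing_vmknics defaults to 1 because the Cisco APIC creates one
    vmknic when the host is added to the DVS.
    """
    MAX_VMKNICS = 8
    target = min(pnic_count, MAX_VMKNICS)
    return max(target - existing_vmknics, 0)

# Example from the text: a host with five PNICs needs four more vmknics.
print(vmknics_to_add(5))  # 4
```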

Note

Do not delete or
attempt to configure the original vmknic on the host. If you do so, you might
lose the ability to communicate with the Cisco
APIC.

Before You Begin

You need to have as
many vmknics as the host has PNICs, up to a maximum of eight.

You also must
verify the number of PNICs on the server and verify that the PNICs you need to
use are up. See the product documentation for the server for specific
instructions.

Step 1

In VMware
vSphere Web Client, complete the following steps to ensure that as many uplinks
as you need are part of the DVS.

Choose
vCenter >
Inventory trees >
Networking.

In the
navigation pane, expand the data center folder, expand the folder that contains
the DVS and server (ESX host), and then expand the DVS.

Click
Portgroups under the DVS.

The
Portgroups pane lists the port groups on the DVS.

In the
Filter field, type
uplink and then press Enter.

The port
group called Uplink appears in the
Portgroups pane.

Click
Uplink, click the
Manage
tab, and then click the
Ports bar.

The list
of uplinks appears.

Verify
that you have the correct number of uplinks, listed as vmnics in the
Connectee column, and that they are up.

In the
navigation pane, click the DVS folder, and then in the central pane, click the
Related Objects tab, and then click the
Distributed Switches bar.

In the
central pane, right-click the DVS and then select
Add
and Manage Hosts.

In the
Add
and Manage Hosts dialog box, choose
Manage host networking and then click
Next.

In the
Select Hosts pane, click
Attached Hosts.

In the
Select member hosts dialog box, choose the server by
checking its check box, and then click
OK.

In the
Select Hosts pane, click
Next.

In the
Select network adapter tasks pane, ensure that only
Manage physical adapters is selected, and then click
Next.

In the
Manage physical network adapters pane, verify that
the desired number of uplinks are part of the DVS.

Take one
of the following actions:

If the
desired number of uplinks are part of the DVS, click
Cancel and go on to Step 2.

If you
need to add uplinks to the DVS, select a vmnic and then use the
Assign Uplink option before proceeding to Step 2.

Step 2

In VMware
vSphere Web Client, complete the following steps to add a vmknic to the host:

Choose
vCenter >
Inventory trees >
Networking.

In the
navigation pane, expand the data center folder, and select the DVS folder that
contains the DVS and the server (ESX host).

In the
navigation pane, click the DVS folder, and then in the central pane, click the
Related Objects tab, and then click the
Distributed Switches bar.

In the
central pane, right-click the DVS and then select
Add and Manage Hosts.

In the
Add and Manage Hosts dialog box, choose
Manage host networking and then click
Next.

In the
Select Hosts pane, click
Attached Hosts.

In the
Select member hosts dialog box, choose the server by
checking its check box, and then click
OK.

In the
Select Hosts pane, click
Next.

In the
Select network adapter tasks pane, ensure that only
Manage VMkernel adapters is selected, and then click
Next.

In the
Manage VMkernel network adapters pane, click
New adapter.

The
Add Networking dialog box opens.

In the
Select target device pane, click
Select an existing distributed port group if it is
not already selected, and then click
Browse.

In the
Select Network dialog box, in the
Filter field, type
vtep and then press
Enter.

The search
filter shows only the vtep port group.

Choose the
vtep port group and then click
OK.

In the
Select Target device dialog box, click
Next.

In
Connection Settings pane, under
Port properties, ensure that
Network label appears as vtep (DVS-name) and
that the IP settings are IPv4, and then click
Next.

In the
IPv4 settings pane, ensure that
Obtain IPv4 settings automatically is selected and
then click
Next.

In the
Ready to complete pane, review all the settings to
ensure that they are correct and then click
Finish.

Step 3

Repeat steps j
through q for each vmknic that you need to add to the host until the number of
uplinks in the DVS equals the number of vtep vmk ports.

Disabling VXLAN Load
Balancing

To disable VXLAN
load balancing, you remove vmknics. However, how you remove them depends on
your circumstances:

If you are
disabling VXLAN load balancing but are not changing the MAC pinning policy,
remove all vmknics and then add one back.

If you are
disabling VXLAN load balancing and changing the policy, the vmknics are no
longer used for VXLAN load balancing. However, you can remove them following
the steps in this procedure.

If you are
removing the host from the DVS, remove all vmknics from the DVS.

Step 1

In VMware
vSphere Web Client, complete the following steps to remove the vmknic from the
host.

Choose
vCenter >
Inventory trees >
Networking.

In the
navigation pane, expand the data center folder, and select the DVS folder that
contains the DVS and the server (ESX host).

In the
navigation pane, click the DVS folder, and then in the central pane, click the
Related Objects tab, and then click the
Distributed Switches bar.

In the
central pane, right-click the DVS and then select
Add and Manage Hosts.

In the
Add
and Manage Hosts dialog box, choose
Manage host networking and then click
Next.

In the
Select Hosts pane, click
Attached Hosts.

In the
Select member hosts dialog box, choose the server by
checking its check box, and then click
OK.

In the
Select Hosts pane, click
Next.

In the
Select network adapter tasks pane, ensure that only
Manage VMkernel adapters is selected, and then click
Next.

In the
Manage VMkernel network adapters pane, click the
VMkernel network adapter that you want to remove; if you want to remove
multiple VMkernel network adapters, click all of them.

Click
Remove and then click
Next.

In the
Analyze impact pane, click
Next.

In the
Ready to complete pane, review the adapter or
adapters to remove and click
Finish.

Step 2

In VMware
vSphere Web Client, complete the following steps to remove the uplink
associated with the vmknic.

Choose
vCenter >
Inventory trees >
Networking.

In the
navigation pane, expand the data center folder, and select the DVS folder that
contains the DVS and the server (ESX host).

In the
navigation pane, click the DVS folder, and then in the central pane, click the
Related Objects tab, and then click the
Distributed Switches bar.

In the
central pane, right-click the DVS and then select
Add and Manage Hosts.

In the
Add
and Manage Hosts dialog box, choose
Manage host networking and then click
Next.

In the
Select Hosts pane, click
Attached Hosts.

In the
Select member hosts dialog box, choose the server by
checking its check box, and then click
OK.

In the
Select Hosts pane, click
Next.

In the
Select network adapter tasks pane, ensure that only
Manage physical adapters is selected, and then click
Next.

In the
Manage physical adapters pane, choose the uplinks
that are part of the DVS and are no longer associated with the VXLAN tunnel
endpoint (VTEP) vmknics that you removed in Step 1.

Click
Unassign adapter and then click
Next.

In the
Analyze impact pane, click
Next.

In the
Ready to complete pane, review the uplinks being
removed and click
Finish.

Mixed-Mode
Encapsulation Configuration

Beginning with Cisco AVS Release 5.2(1)SV3(2.5), you can configure a single VMM domain to use VLAN and VXLAN encapsulation. Previously, encapsulation was determined solely by the presence of VLAN or multicast pools, and you needed to have separate VMM domains for EPGs using VLAN and VXLAN encapsulation.

When you create a VMM domain, you now can explicitly choose its encapsulation mode: VLAN, VXLAN, or Unspecified. Unspecified mode adopts the encapsulation mode based on existing pools, as in pre-5.2(1)SV3(2.5) releases. That is, VLAN is adopted if a VLAN pool is present, and VXLAN is adopted if a multicast pool is present and no VLAN pool is present. If you do not explicitly choose an encapsulation mode, Unspecified mode is configured for the VMM domain.
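The fallback behavior of Unspecified mode can be expressed as a short decision function (a minimal sketch of the rules stated above; the function and parameter names are ours, not part of any Cisco API):

```python
def resolve_domain_encap(mode, has_vlan_pool, has_multicast_pool):
    """Resolve a VMM domain's effective encapsulation mode.

    An explicit VLAN or VXLAN choice wins outright. Unspecified falls
    back to the pre-5.2(1)SV3(2.5) behavior: VLAN if a VLAN pool is
    present, otherwise VXLAN if a multicast pool is present.
    """
    if mode in ("VLAN", "VXLAN"):
        return mode
    if has_vlan_pool:
        return "VLAN"
    if has_multicast_pool:
        return "VXLAN"
    return None  # no pools: no encapsulation can be inferred yet
```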

By default, each EPG that you create for a VMM domain uses the domain's encapsulation mode. However, when you create a new EPG and associate it with a domain, you now can configure the EPG to override the domain encapsulation mode and use another mode.

For example, you might choose VLAN configuration when you create a VMM domain. When you create a new EPG for the domain, you might configure it to use VLAN—the domain mode—or you might configure it to use VXLAN.

You can configure the encapsulation mode, check it, and override it through the Cisco APIC GUI, NX-OS style CLI, or REST API. See the procedures in this section for instructions.

Note

Mixed-mode encapsulation is available for Cisco AVS in local switching mode only.

Benefits of Mixed-Mode Encapsulation

Mixed-mode encapsulation enables you to have a single domain for all EPGs, regardless of encapsulation mode. Previously, you needed to create and maintain separate domains for EPGs using VLAN—such as for management, vMotion, storage traffic, or L4-L7 services—and for EPGs using VXLAN—such as for tenant data traffic. Mixed-mode encapsulation makes it easier to keep track of and manage EPGs.

Upgrade and Downgrade Considerations

If you are upgrading Cisco AVS from a previous release to 5.2(1)SV3(2.5), the Cisco AVS uses the preupgrade encapsulation mode on existing VMM domains. For example, if the encapsulation mode for a VMM domain before the upgrade was VXLAN, the encapsulation mode for the VMM domain will continue to be VXLAN after the upgrade. If the preupgrade encapsulation mode was VLAN, the encapsulation mode for the VMM domain will continue to be VLAN after the upgrade. However, you can change the encapsulation mode for a domain after an upgrade.

In a release earlier than 5.2(1)SV3(2.5), if Layer 4-Layer 7 services graphs are deployed in a VMM domain with default VLAN encapsulation, VLAN will continue to be the default encapsulation for the domain after upgrading to 5.2(1)SV3(2.5). However, if you need to place Layer 4-Layer 7 services EPGs and all other EPGs into a mixed-mode VMM domain, you need to delete the existing service graph, add all the needed hosts to the new mixed-mode VMM domain, and then deploy the service graph, freshly defining its consumer and provider according to the new VMM domain.

Note

You need to complete the upgrade of the ACI fabric and Cisco AVS before creating a mixed-mode VMM domain or modifying an existing domain to be mixed. See the section "Recommended Upgrade Sequence for Cisco APIC, the Fabric Switches, and the Cisco AVS" in the Cisco Application Virtual Switch Installation Guide.

If you are downgrading to a release earlier than 5.2(1)SV3(2.5), remove mixed-mode encapsulation before you downgrade: reset all the EPGs that you deployed in mixed mode to VLAN encapsulation and then proceed with the downgrade.

Encapsulation Pool Combinations

Your ability to add and delete VLAN and multicast pools for a VMM domain depends on whether EPGs are associated with the domain.

If no EPGs are associated with the VMM domain, you can add and delete VLAN and multicast pools, regardless of whether the VMM domain default encapsulation mode is Unspecified, VLAN, or VXLAN.

If EPGs are associated with the VMM domain, your ability to add and delete pools depends on the VMM domain default encapsulation mode:

Unspecified—You can configure only one type of pool, either VLAN or multicast. If you configure a pool with a different mode, an error is triggered with a message asking you to set a default encapsulation mode.

VLAN—You can configure both VLAN and multicast pools. However, you cannot delete existing multicast pools.

VXLAN—You can configure both VLAN and multicast pools. However, you cannot delete existing multicast pools.

When you associate an EPG with a VMM domain, you can choose one of the following encapsulation modes for the EPG:

auto—This causes the EPG to use the same encapsulation mode as the VMM domain. This is the default configuration.

vlan—This overrides the domain's VXLAN configuration, and the EPG will use VLAN encapsulation. However, a fault will be triggered for the EPG if a VLAN pool is not configured on the domain.

vxlan—This overrides the domain's VLAN configuration, and the EPG will use VXLAN encapsulation. However, a fault will be triggered for the EPG if a multicast pool is not configured on the domain.
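The override and fault rules in the list above can be sketched as follows (illustrative only; the names are ours, and the fault is modeled as a simple flag rather than a Cisco APIC fault object):

```python
def epg_effective_encap(epg_mode, domain_encap,
                        has_vlan_pool, has_multicast_pool):
    """Return (effective_encap, fault_raised) for an EPG in a VMM domain.

    auto inherits the domain encapsulation; vlan or vxlan overrides it.
    A fault is raised when the pool required by the effective mode
    (VLAN pool for vlan, multicast pool for vxlan) is missing.
    """
    encap = domain_encap if epg_mode == "auto" else epg_mode
    if encap == "vlan":
        return encap, not has_vlan_pool
    return encap, not has_multicast_pool
```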

Configuring a Port
Channel or Virtual Port Channel Using the Advanced GUI

Caution: Cisco recommends that you do not mix configuration modes (Advanced or Basic). When you make a configuration in either mode and change the configuration using the other mode, unintended changes can occur. For example, if you apply an interface policy to two ports using Advanced mode and then change the settings of one port using Basic mode, your changes might be applied to both ports.

Step 1

Log in to the
Cisco
APIC,
choosing
Advanced mode.

Step 2

Choose
Fabric >
Access
Policies.

Step 3

Open
the Interface
Policies folder.

Step 4

Right-click the
Profiles folder and choose
Create
Interface Profile.

Step 5

In the
Create
Interface Profile dialog box, enter a name for the profile in the
Name field.

Step 6

In the
Interface Selectors field, click
+ to add an access port selector.

Step 7

In the
Create
Access Port Selector dialog box, complete the following steps:

In the
Name field, enter a name for the access port.

In the
Interface IDs field, enter the interface IDs where
the host is located.

In the
Create
Port Channel Policy dialog box, complete the following actions:

In the
Name field, enter a name for the policy.

In the
Mode field, choose one of the following options
appropriate to your setup:

Static Channel - Mode
On

LACP Active

LACP Passive

MAC Pinning

MAC
Pinning-Physical-NIC-load

Note

LACP
Passive mode is not supported for directly connected hosts. Ports using LACP
Passive mode do not initiate an LACP handshake. We recommend that you always
use LACP Active instead of LACP Passive. LACP Passive can be used only with
AVS/TOR policy groups when there is an intermediate Layer 2 device and the
Layer 2 device ports are using LACP Active mode.

Note

MAC
Pinning-Physical-NIC-load mode is not supported for Cisco AVS.

Click
SUBMIT.

Step 10

In the
Create PC
Interface Policy Group or
Create
VPC Interface Policy Group dialog box, in the
Attached
Entity Profile field, choose
default, and then click
SUBMIT.

Step 11

In the
Create
Access Port Selector dialog box, click
OK.

Step 12

In the
Create
Interface Profile dialog box, click
SUBMIT.

Configuring a Port
Channel or VPC and a Port Channel Policy Using the Basic GUI

Caution: Cisco recommends that you do not mix configuration modes (Advanced or Basic). When you make a configuration in either mode and change the configuration using the other mode, unintended changes can occur. For example, if you apply an interface policy to two ports using Advanced mode and then change the settings of one port using Basic mode, your changes might be applied to both ports.

Step 1

Log into Cisco
APIC, choosing
Basic mode.

Step 2

Choose
Fabric > Inventory.

Step 3

In the
Inventory navigation pane, choose the pod and then
click the
Configure tab.

Step 4

In the pod work
pane, click
ADD
SWITCHES.

Step 5

In the
ADD
SWITCHES dialog box, choose the switch or node to be configured and
then click
ADD
SELECTED.

Step 6

In the pod work
pane, on the image of the switch or node, click the ports to be configured.

Step 7

In the
Summary pane, click
CONFIGURE PC
or
CONFIGURE VPC.

Step 8

In the
Port
Channel or
VPC pane, in the
Policy
Group Name field, enter a name for the policy group.

Step 9

Click the
VLAN
tab, and then from the VLAN Domain drop-down list,
choose a domain.

Configuring a Port
Channel Policy Using the Cisco Advanced GUI

Caution: Cisco recommends that you do not mix configuration modes (Advanced or Basic). When you make a configuration in either mode and change the configuration using the other mode, unintended changes can occur. For example, if you apply an interface policy to two ports using Advanced mode and then change the settings of one port using Basic mode, your changes might be applied to both ports.

Step 1

Log in to the
Cisco
APIC,
choosing
Advanced mode.

Step 2

On the menu bar,
choose
Fabric >
Access
Policies.

Step 3

In the
Policies navigation pane, open the
Interface Policies folder and then open the
Policies folder.

Step 4

Right-click the
Port
Channel Policies folder.

Step 5

Choose
Create
Port Channel Policy.

Step 6

In the
Create
Port Channel Policy dialog box, enter the policy name in the
Name field.

Step 7

In the
Mode field, choose one of the following options
appropriate to your setup:

Static Channel - Mode
On

LACP Active

LACP Passive

MAC Pinning

Step 8

Click
SUBMIT.

Configuring a Port Channel Policy Using the REST API

Configuring an LACP
Port Channel Policy Using the REST API

Step 1

Create a node
profile that specifies the leaf IDs that the access port profiles are
associated with.
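As a rough illustration of Step 1, the payload below builds a node profile that selects leaf IDs 101 and 102 and associates them with an access port profile. The class and attribute names (infraNodeP, infraLeafS, infraNodeBlk, infraRsAccPortP) follow the commonly documented Cisco APIC infra object model, but the profile names and leaf IDs are placeholders; verify the exact schema against the APIC version you are running:

```python
import xml.etree.ElementTree as ET

# Hypothetical node profile payload; it would typically be POSTed to
# https://<apic-ip>/api/node/mo/uni/infra.xml after authentication.
payload = """
<infraInfra>
  <infraNodeP name="avs-nodeProfile">
    <infraLeafS name="leafSelector" type="range">
      <infraNodeBlk name="block1" from_="101" to_="102"/>
    </infraLeafS>
    <infraRsAccPortP tDn="uni/infra/accportprof-avs-portProfile"/>
  </infraNodeP>
</infraInfra>
""".strip()

# Parse the payload locally to confirm it is well-formed before sending.
root = ET.fromstring(payload)
blk = root.find("./infraNodeP/infraLeafS/infraNodeBlk")
print(blk.get("from_"), blk.get("to_"))  # 101 102
```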

LACP Load Balancing Configuration

Port channels provide load balancing by default for Cisco AVS, distributing the traffic from Cisco AVS to upstream devices over available uplink ports. You can choose the parameters, or methods, for LACP load balancing.

Beginning with Cisco AVS Release 5.2(1)SV3(2.5), you can configure LACP load balancing for Cisco AVS using any one of more than a dozen different parameters. Previously, LACP load balancing was automatically configured, using the source MAC address. The additional parameters give you greater flexibility in balancing upstream traffic from Cisco AVS using LACP.

You configure LACP load balancing by issuing a vemcmd command through the ESXi CLI. You cannot configure LACP load balancing through the Advanced GUI, Basic GUI, or REST API.

The following list shows the vemcmd command parameters that you can enter and their descriptions:

destination-mac—Destination MAC address

source-mac—Source MAC address

source-dest-mac—Source and destination MAC addresses

destination-ip-vlan—Destination IP address and VLAN

source-ip-vlan—Source IP address and VLAN

source-dest-ip-vlan—Source and destination IP addresses and VLAN

destination-port—Destination TCP/UDP port number

source-port—Source TCP/UDP port number

source-dest-port—Source and destination TCP/UDP port numbers

dest-ip-port—Destination IP address and TCP/UDP port number

source-ip-port—Source IP address and TCP/UDP port number

source-dest-ip-port—Source and destination IP addresses and TCP/UDP port numbers

dest-ip-port-vlan—Destination IP address, TCP/UDP port number, and VLAN

source-ip-port-vlan—Source IP address, TCP/UDP port number, and VLAN

source-dest-ip-port-vlan—Source and destination IP addresses, TCP/UDP port numbers, and VLAN

destination-ip—Destination IP address

source-ip—Source IP address

source-dest-ip—Source and destination IP addresses

vlan-only—VLAN only

source-virtual-port-id—Source virtual port ID
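A common way to turn such a parameter into an uplink choice is to hash the selected header fields and reduce the result modulo the number of active uplinks. The sketch below illustrates the idea for three of the parameters listed above; it is our own illustration, not the Cisco AVS implementation:

```python
import zlib

# Header fields consulted by a few of the load-balancing parameters
# (illustrative subset of the full list above).
FIELD_SETS = {
    "source-mac": ("src_mac",),
    "source-dest-ip-vlan": ("src_ip", "dst_ip", "vlan"),
    "source-virtual-port-id": ("src_vport",),
}

def pick_uplink(packet, method, uplink_count):
    """Hash the fields chosen by the method onto one uplink index.

    All packets of a flow share the same field values, so the flow is
    consistently pinned to the same uplink.
    """
    key = "|".join(str(packet[field]) for field in FIELD_SETS[method])
    return zlib.crc32(key.encode()) % uplink_count

pkt = {"src_mac": "00:50:56:aa:bb:01", "src_ip": "10.0.0.5",
       "dst_ip": "10.0.0.9", "vlan": 100, "src_vport": 7}
print(pick_uplink(pkt, "source-dest-ip-vlan", 4))
```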

Note

The load balancing methods that use port channels do not apply to multicast traffic. Regardless of the method configured, multicast traffic uses the following methods for load balancing with port channels:

Configuring LACP Load Balancing

Before You Begin

You need to have a port channel LACP policy already configured.

You might have configured a policy if you used the unified configuration wizard when you installed Cisco AVS. See the section "Creating Interface and Switch Profiles and a vCenter Domain Profile Using the Advanced GUI" in the Cisco AVS Installation Guide. If you have not created a port channel LACP policy, see the procedures in the section Configuring a Port Channel Policy in this guide.

For each
interface policy group that you choose, choose
All
or
Specific.

If you
choose
All
, the attached entity will apply to all interfaces
associated with the policy group. If you choose
Specific, you need to choose a switch ID from the
Switch
IDs drop-down list that appears to the right of the interface policy group
list.

Configuring vSwitch
Override Policies on the VMM Domain Using the Advanced GUI

Before installing Cisco AVS, you can use the configuration wizard to create a VMware vCenter profile and create interface policy group policies for Cisco AVS. You also can create vSwitch policies that override the interface policy group policies and apply a different policy for the leaf.

However, if you did not use the configuration wizard—or if you used the configuration wizard but did not configure a vSwitch override policy—you can configure a vSwitch override policy by following the procedure in this section.

Note

In Cisco AVS 5.2(1)SV3(1.10), you cannot create a Distributed Firewall policy on the vSwitch using the configuration wizard.
See the section "Configuring Distributed Firewall" in the Cisco ACI Virtualization Guide for instructions for configuring a Distributed Firewall policy and associating it to the VMM domain.

Note

Previously, you could configure a vSwitch override policy through the Fabric tab as well as the VM Networking tab. Override policies configured through the VM Networking tab took precedence. However, any override policy configured through the Fabric tab remains in effect until it is reconfigured through the VM Networking tab.

Before You Begin

We recommend that
you already have created access policies and an attachable access entity
profile for Cisco AVS.

Step 1

Log in to the
Cisco APIC, choosing Advanced mode.

Step 2

Go to VM Networking > Inventory > VMware.

Step 3

In the navigation pane, choose the relevant VMM domain.

Step 4

In the VMM domain work pane, scroll to the VSwitch Policies area, and from the appropriate vSwitch policy drop-down list, choose the policy that you want to apply as an override policy.

Step 5

Click SUBMIT.

What to Do Next

Verify that the
policies are in effect on Cisco AVS.

Configuring SPAN
Features

You cannot use the
Cisco AVS
VXLAN tunnel endpoints (VTEPs), uplinks, or port channels as the source or
destination of a SPAN session. The
Cisco AVS
supports 64 SPAN sessions per DVS (local SPAN and ERSPAN). A source can be a
member of a maximum of four SPAN sessions.

Guidelines for
configuring SPAN

Follow these
guidelines when you configure local SPAN sessions on the Cisco AVS:

They are limited to a single
vLeaf per session.

They are defined by a
destination access port or client end point (CEP). An EPG as a destination is
not supported.

They are
deployed on the vLeaf when a destination CEP is defined.

No regular
traffic is allowed from or to the destination CEP.

Guidelines for
Configuring ERSPAN

Follow these
guidelines when you configure ERSPAN sessions on the
Cisco AVS:

They are defined based on an
IP address with other optional parameters.

They can be deployed on
multiple vLeafs.

They are deployed to a vLeaf
when a source CEP or endpoint group (EPG) is defined.

The destination for an ERSPAN
session should always be in overlay-1 (infraVRF [virtual routing and
forwarding]). If the destination is a VM behind the AVS, bring it up in the
VTEP EPG.

The ERSPAN
destination should always be remote. ERSPAN from a
Cisco AVS
to a destination hosted behind the same
Cisco AVS
is not supported.

If the ERSPAN destination is
a VM, make sure that vMotion is disabled on it. If the ERSPAN destination VM is
moved to another host for any reason, make sure that the static CEP is
configured accordingly. See Step 29 through Step 32 in the section "Configuring
SPAN Features Using the Advanced GUI."

The IP address
for the destination can be obtained using DHCP (Option 61 is needed during
DHCP) or static configuration. Make sure that the IP address is in the same
subnet as the other VTEPs in overlay-1 (infra VRF).

Note

Not all
operating systems for VMs and devices support Option 61 for DHCP. In those
cases, use a static IP address on infra VLAN. Choose a static IP address for
ERSPAN carefully because it might lead to an IP conflict with the leased DHCP
IPs on infra VLAN.

Guidelines for
Configuring SPAN or ERSPAN with a UCS B Series Server

If you want to
configure SPAN or ERSPAN on Cisco AVS, and the Cisco AVS hosts are running on a
UCS B Series server, you must configure a port channel (PC) interface policy
group with MAC pinning for the interfaces connecting to the fabric
interconnects. This is because the virtual source (vsource) and virtual
destination (vdestination) groups are specified only on PC policy groups.

Understanding Bridge
Protocol Data Unit Features

The following sections
describe supported bridge protocol data unit (BPDU) features on the
Cisco AVS
with the Cisco
APIC.
BPDU guard and BPDU filtering are switch-wide features, and they are applicable
only for VM veth ports.

BPDU
Guard

BPDU Guard prevents
loops by moving a nontrunking port into the errdisable state when a BPDU is
received on that port. When you enable BPDU Guard on the switch, the interface
is moved to the blocking state on receiving a BPDU.

BPDU Guard provides
a secure response to invalid configurations because the administrator must
manually put the interface back in service. A VM port must be disconnected from
and then reconnected to the Cisco AVS or an EPG port group through vCenter to
put the interface back in service.

BPDU
Filtering

BPDU filtering
prevents the sending and receiving of BPDUs on ports. Any BPDUs that are
received are dropped when filtering is enabled. BPDU filtering is enabled on
VM veth ports by default. When you enable this feature, Cisco AVS drops all
BPDUs received on uplink ports.

Note

In Cisco AVS
5.2(1)SV3(1.5) and later releases, we recommend that you configure BPDU policy
in a single policy interface group. Configuring BPDU in multiple policy
interface groups leads to inconsistent behavior.

Note

In an L2 switch
extended topology, we recommend that you configure BPDU policy through an
attached entity profile vSwitch policy override. If the interface policy group
is used for configuration, then BPDU Guard/filter will be enabled on the Leaf
ports, causing those ports to become error-disabled when they receive BPDU
packets from an L2 switch. For information about configuring BPDU policy
through an override policy, see the section "Modifying the Interface Policy
Group to Override the vSwitch-Side Policies" in the
Cisco
Application Virtual Switch Installation Guide.

Configuring BPDU
Features Using the Advanced GUI

Caution: Cisco recommends that you do not mix configuration modes (Advanced or Basic). When you make a configuration in either mode and change the configuration using the other mode, unintended changes can occur. For example, if you apply an interface policy to two ports using Advanced mode and then change the settings of one port using Basic mode, your changes might be applied to both ports.

Configuring BPDU
Features Using the Basic GUI

Caution: Cisco recommends that you do not mix configuration modes (Advanced or Basic). When you make a configuration in either mode and change the configuration using the other mode, unintended changes can occur. For example, if you apply an interface policy to two ports using Advanced mode and then change the settings of one port using Basic mode, your changes might be applied to both ports.

Step 1

Log in to Cisco
APIC, choosing
Basic mode.

Step 2

Choose
VM
Networking > Inventory.

Step 3

In the
Inventory navigation pane, open the
VMware folder and then choose the VMM domain for
which you configure BPDU features.

Step 4

In the domain
work pane, in the
VSwitch
Policies area, from the
STP
Policy drop-down list, choose
Create
Spanning Tree Interface Policy.

Step 5

In the
Create
Spanning Tree Interface Policy dialog box, complete the following
steps:

Configuring In-band
Management Access and Cisco AVS Host Management

To configure host
management for Cisco AVS, you first must configure in-band management for Cisco
APIC and the Cisco ACI fabric. You then must make the Cisco AVS nodes part of
the in-band management network, verify that the port group has been created for
the management EPG, and then migrate a management VM kernel NIC (vmk) to the
management EPG port group.

Configuring Cisco
AVS Host Management Using the GUI

After you configure
in-band management for Cisco APIC, you need to configure host management on
Cisco AVS.

Caution: Cisco recommends that you do not mix configuration modes (Advanced or Basic). When you make a configuration in either mode and change the configuration using the other mode, unintended changes can occur. For example, if you apply an interface policy to two ports using Advanced mode and then change the settings of one port using Basic mode, your changes might be applied to both ports.

Before You Begin

You must have
configured in-band management for Cisco APIC and Cisco ACI fabric. Follow the
procedure "Configuring In-Band Management Access Using the Advanced GUI" in the
Cisco APIC
Getting Started Guide, Release 1.2(1x).

Step 1

Log in to Cisco
APIC, choosing
Advanced or
Basic mode.

Step 2

Make the Cisco
AVS nodes part of in-band management by completing the following steps:

Go to
Tenants > mgmt and then open the following
folders:
Application Profiles,
default,
Application EPGs, and
EPG
default.

Choose the
Domains (VMs and Bare-Metals) folder.

In the
Domains (VMs and Bare-Metals) work pane, from the
ACTIONS drop-down list, choose
Add
VMM Domain Association.

Guidelines and
Limitations for Configuring IGMP Snooping and Querier

Depending on your setup, you might need to configure IGMP on Layer 2 switches or on the infra tenant or administrator-created tenant bridge domains. This section provides guidelines for two common scenarios in which you must configure IGMP snooping and querier.

Note

Cisco AVS does not
support IGMP snooping. The guidelines and limitations and configuration
procedures for IGMP snooping in this section are for configuring IGMP snooping
on the leaf switch.

Minimizing
Multicast Flooding for VXLAN-Encapsulated Traffic

To minimize
multicast flooding for VXLAN-encapsulated traffic originating from and
terminating on the
Cisco AVS,
and if a Layer 2 device is between the leaf and the
Cisco AVS,
do the following:

Enable IGMP snooping on each Layer 2 device between the leaf and the Cisco AVS. Follow the instructions that are specific to the device. For example, if the Layer 2 device is a Cisco Nexus 5000 Series switch, see the instructions in the configuration guide for that switch.

If a multicast-capable router is not present in the network, you must configure IGMP querier on the leaf. Alternatively, you must configure IGMP querier on each Layer 2 switch on a static multicast router port.

You should
enable IGMP querier on the infra tenant bridge domain subnet through the Cisco
APIC.
See the instructions in the section "Configuring IGMP Querier."
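For the Layer 2 device step above, the exact commands depend on the device and software release. As an illustrative sketch for a Cisco Nexus 5000 Series switch running NX-OS (the VLAN ID 100 and querier address 10.10.10.1 are placeholder values; consult the configuration guide for your platform and release):

```
! IGMP snooping is enabled globally by default on NX-OS
ip igmp snooping

! Per-VLAN querier, needed only when no multicast-capable router is present
vlan configuration 100
  ip igmp snooping querier 10.10.10.1
```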

Sending or
Receiving Multicast Streams with Virtual Machines

If you have virtual
machines connected to the
Cisco AVS
and want to send or receive multicast streams, do the following:

Enable IGMP
snooping on each Layer 2 device between the leaf and the
Cisco AVS.
Follow the instructions that are specific to the device. For example, if the
Layer 2 device is a Cisco Nexus 5000 Series switch, see the instructions in the
configuration guide for that switch.

If a
multicast-capable router is not present in the network, you must configure IGMP
querier on the leaf for the administrator-created tenant bridge domain that the
VMs belong to. Alternatively, you must configure IGMP querier on each Layer 2
switch on a static multicast router port.

If you have
multiple administrator-created tenant bridge domains, you must configure IGMP
querier on each administrator-created tenant bridge domain through the
APIC.
See the instructions in the section "Configuring IGMP Querier."

If the multicast traffic that originates from or terminates on the VMs is VXLAN-encapsulated, follow the guidelines in the previous section as well as the guidelines in this one.

Order of Configuration

You must configure
IGMP querier before you configure IGMP snooping.

Configuring IGMP
Querier Using the Advanced GUI

Step 1

Log in to the
Cisco
APIC,
choosing
Advanced mode.

Step 2

Complete one of
the following series of steps, depending on the type of tenant:

If you have
...

Then...

An infra
tenant

Choose
Tenants >
infra.

In the
navigation pane, open the following folders:
Networking > Bridge
Domains > default > Subnets.

Choose
the subnet in the
Subnets folder.

In the
Properties work pane, in the
Subnet Control area, make sure that the
Querier IP check box is checked.

Click
SUBMIT.

An administrator-created
tenant

Choose
Tenants and then choose the tenant on which you want
to configure the IGMP querier.

In the tenant navigation
pane, open the
Networking folder, the
Bridge Domain folder, and then the folder for the
bridge domain created earlier for the tenant.

If the
selected bridge domain already has a subnet with a gateway IP, you can use it
to enable IGMP querier in the
Subnet Control area, or you can follow the remaining
steps to create a new subnet to enable IGMP querier.
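In the APIC object model, the Querier IP check box described above corresponds to the ctrl attribute of the bridge domain subnet (fvSubnet) object. The following is a minimal sketch of building the equivalent REST payload; the gateway address is a placeholder, and the POST target (for example, /api/mo/uni/tn-infra/BD-default.xml for the infra tenant) is illustrative:

```python
import xml.etree.ElementTree as ET

def build_querier_payload(gateway_ip: str) -> str:
    """Build the XML body that enables IGMP querier on a bridge domain subnet.

    The fvSubnet "ctrl" attribute carries the Querier IP flag that the
    GUI check box sets.
    """
    subnet = ET.Element("fvSubnet", ip=gateway_ip, ctrl="querier")
    return ET.tostring(subnet, encoding="unicode")

# Placeholder gateway IP for the bridge domain subnet
payload = build_querier_payload("10.0.0.30/27")
print(payload)
```

The same attribute can be cleared later by posting the subnet with an empty ctrl value; as always, the GUI procedure above remains the documented path.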

Using the ACI
Simulator

The
ACI
Simulator
consists of the
Application Policy Infrastructure Controller
(APIC) software running on a UCS C-series server, controlling a set of five
software-simulated ACI switches connected in a simulated fabric topology. You
also can add the
Cisco AVS
to the
ACI
Simulator.

Using the ACI Simulator allows you to explore the APIC GUI and to create configurations in the simulated fabric. Because the simulator can also be configured using the CLI or the REST API, it allows you to develop scripts and applications without having to buy physical switches.

Note

The ACI Simulator supports the Access Port Policy Group and the single-interface PC Interface Policy Group with LACP mode set to MAC Pinning or Off (static port channel).

If you want to use
Cisco AVS
with the
ACI
Simulator,
follow the instructions in the
Cisco ACI Simulator Getting Started Guide.

However, when
following the procedures in the
Cisco ACI Simulator Getting Started Guide,
you need to take a few additional steps:

When you create a vCenter domain profile in the APIC GUI and choose Cisco AVS as your virtual switch, be sure to choose Local Switching or No Local Switching as your switching preference and VLAN or VXLAN as your encapsulation type.

After you deploy
an application policy, install the
Cisco AVS,
following the instructions in the
Cisco Application Virtual Switch Installation Guide.

Add
Cisco AVS
hosts to the DVS, following the instructions in the
Cisco Application Virtual Switch Installation Guide.

Return to the
Cisco ACI Simulator Getting Started Guide
and follow its instructions for installing the
ACI
Simulator
software.

Guidelines for Using vMotion with Cisco AVS

Follow the guidelines in this section when using vMotion with Cisco AVS.

vMotion Configuration

We recommend
that you configure vMotion on a separate VMkernel NIC with a separate EPG. Do
not configure vMotion on the VMkernel NIC created for the
OpFlex
channel.

We recommend
that you do not delete or change any parameters for the VMkernel NIC created
for the
OpFlex
channel.

Ensure that OpFlex is up on
the destination host. Otherwise the EPG will not be available on the host.

Note

If you delete the VMkernel NIC created for the OpFlex channel by mistake, recreate it, attach it to the vtep port group, and configure it with a dynamic IP address. Never configure a static IP address for an OpFlex VMkernel NIC.
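If you recreate the vmknic from the ESXi command line rather than the vSphere Web Client, the steps are roughly as follows. This is a sketch only: the interface name, DVS name, and port ID are placeholders, and esxcli option names vary by ESXi release, so verify them against the VMware documentation for your version.

```
# Recreate the VMkernel NIC on the DVS port used for OpFlex
# (vmk1, MyAVS, and the port ID are placeholder values)
esxcli network ip interface add --interface-name=vmk1 --dvs-name=MyAVS --dvport-id=100

# Use DHCP for the OpFlex vmknic; never assign it a static address
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=dhcp
```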

vMotion with Cisco AVS When Using VXLAN Encapsulation

When using vMotion with Cisco AVS and virtual extensible LAN (VXLAN) encapsulation, you must take the following into account when setting the maximum transmission unit (MTU).

Using the default MTU value of 1500 causes a timeout during vMotion migration to Cisco AVS, so we recommend an MTU greater than or equal to 1600. However, to optimize performance, set the MTU to the maximum allowed value of 8950.

Cisco AVS enforces the physical NIC (PNIC) MTU by fragmenting or segmenting the inner packet. Any switch in the path, such as a Fabric Interconnect, must have an MTU value greater than or equal to the Cisco AVS PNIC MTU.

The path MTU between the Virtual Tunnel Endpoint (VTEP) and the fabric must be greater than the Cisco AVS PNIC MTU because reassembly of VXLAN packets is not supported.
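The 1600-byte recommendation follows from the VXLAN encapsulation overhead: the outer Ethernet, IPv4, UDP, and VXLAN headers add roughly 50 bytes to each inner frame, so a full 1500-byte inner packet no longer fits in a 1500-byte outer MTU. A small sketch of that arithmetic:

```python
# Per-packet overhead added by VXLAN encapsulation (standard header sizes)
OUTER_ETHERNET = 14  # outer MAC header
OUTER_IPV4 = 20      # outer IP header
OUTER_UDP = 8        # outer UDP header
VXLAN_HEADER = 8     # VXLAN header (flags + VNI)

VXLAN_OVERHEAD = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER  # 50 bytes

def outer_frame_size(inner_mtu: int) -> int:
    """Size of the encapsulated frame for a full-sized inner packet."""
    return inner_mtu + VXLAN_OVERHEAD

# A default 1500-byte path MTU cannot carry 1500 inner bytes plus 50 bytes
# of overhead, which is why vMotion times out at the default setting.
print(outer_frame_size(1500))  # 1550, hence the >= 1600 recommendation
```

The 8950 ceiling cited above similarly leaves headroom below a typical 9000-byte jumbo-frame MTU for the encapsulation overhead.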

Cross-vCenter vMotion Support

Microsegmentation with Cisco ACI for Cisco AVS is not supported for cross-vCenter and cross-vDS vMotion.

Guidelines for Using Cross-vCenter and Cross-vDS vMotion

The source and
destination VMware vCenter Server instances and ESXi hosts must be running
version 6.0 or later.

The source and destination vSphere Distributed Switch (vDS) versions must be the same.

Refer to the VMware documentation for the prerequisites for cross-vDS and cross-vCenter vMotion.

Removing an Existing
Distributed Virtual Switch

You need to remove
the existing distributed virtual switch (DVS) if you want to change the
switching mode on a
Cisco AVS. After you remove the DVS, you can
create a new VMware vCenter domain with the switching mode that you want.

Note

If you remove the host from the
Cisco AVS
while VXLAN load balancing is enabled, make sure to delete all the additional
vmknics for load balancing before you delete the OpFlex vmknic.

Step 1

In VMware
vSphere Web Client, detach all the EPGs that are associated with VMs on the
DVS.

Step 2

Delete the VM
kernel NIC created for OpFlex on all hosts on the DVS.

Step 3

Remove the VMM
domain from the Cisco
APIC.

This step should automatically remove the DVS from vCenter. However, if you perform Step 3 without first performing Step 1 and Step 2, you must remove the DVS from vCenter manually.
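If you perform the vmknic cleanup of Step 2 from the ESXi command line rather than the vSphere Web Client, a sketch of the deletion follows. The interface names are placeholders; list the interfaces first, and per the Note above, delete the additional load-balancing vmknics before the OpFlex vmknic.

```
# List VMkernel interfaces to identify the OpFlex and load-balancing vmknics
esxcli network ip interface list

# Delete the additional load-balancing vmknics first ...
esxcli network ip interface remove --interface-name=vmk2

# ... and the OpFlex vmknic last
esxcli network ip interface remove --interface-name=vmk1
```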

Configuring Layer 4
to Layer 7 Services

For information about
configuring Layer 4 to Layer 7 services on the Cisco AVS, see the
Cisco APIC Layer
4 to Layer 7 Services Deployment Guide.

When you follow the instructions in the Cisco APIC Layer 4 to Layer 7 Services Deployment Guide, configure the services on the Cisco AVS instead of on the VMware Distributed Virtual Switch (DVS).

You must install
Cisco AVS
before you can configure Layer 4 to Layer 7 services.

Beginning with Cisco
AVS Release 5.2(1)SV3(1.10), Layer 4 to Layer 7 service graphs are supported
for Cisco AVS. Layer 4 to Layer 7 service graphs for Cisco AVS can be
configured for VMs only and in VLAN mode only. Layer 4 to Layer 7 service
integration is not supported when the service VMs are deployed on a host with
VXLAN encapsulation.

Creating Endpoint
Groups

Intra-EPG
Isolation Enforcement for Cisco AVS

By default, endpoints within an EPG can communicate with each other without any contracts in place. However, beginning with Cisco AVS Release 5.2(1)SV3(1.20), you can isolate endpoints within an EPG from each other. In some instances, you might want to enforce endpoint isolation within an EPG to prevent a VM with a virus or other problem from affecting other VMs in the EPG.

You can configure
isolation on endpoints within an application EPG or endpoints within
Microsegmentation EPGs.

You can configure
isolation on all or none of the endpoints within an application or
Microsegmentation EPG; you cannot configure isolation on some endpoints but not
on others.

Isolating endpoints
within an EPG does not affect any contracts that enable the endpoints to
communicate with endpoints in another EPG.

Isolating endpoints within an EPG triggers a fault when the EPG is associated with Cisco AVS domains in VLAN mode.

Note

Traffic will not
work between two endpoints that belong to different EPGs in which isolation is
enforced.

Configuring
Intra-EPG Isolation for Cisco AVS Using the GUI

Follow this
procedure to create an EPG in which the endpoints of the EPG are isolated from
each other.

The port that the
EPG uses must belong to one of the VM Managers (VMMs).

Note

This procedure
assumes that you want to isolate endpoints within an EPG when you create the
EPG. If you want to isolate endpoints within an existing EPG, select the EPG in
Cisco APIC, and in the
Properties pane, in the
Intra
EPG Isolation area, choose
Enforced, and then click
SUBMIT.
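In the APIC object model, the Intra EPG Isolation setting described in the Note maps to the pcEnfPref attribute of the EPG (fvAEPg) object. The following is a minimal sketch of building the REST payload that enforces isolation; the EPG name is a placeholder, and the POST target (for example, /api/mo/uni/tn-<tenant>/ap-<profile>/epg-<epg>.xml) is illustrative:

```python
import xml.etree.ElementTree as ET

def build_isolation_payload(epg_name: str) -> str:
    """Build the XML body that sets Intra EPG Isolation to Enforced.

    The fvAEPg "pcEnfPref" attribute corresponds to the Enforced/Unenforced
    choice in the GUI's Intra EPG Isolation area.
    """
    epg = ET.Element("fvAEPg", name=epg_name, pcEnfPref="enforced")
    return ET.tostring(epg, encoding="unicode")

# Placeholder EPG name
payload = build_isolation_payload("web-epg")
print(payload)
```

Setting pcEnfPref back to "unenforced" restores the default behavior in which endpoints within the EPG can communicate without contracts.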

Before You Begin

Make sure that Cisco
AVS is in VXLAN mode.

Step 1

Log in to Cisco
APIC, using
Advanced or
Basic mode.

Step 2

Choose
Tenants, expand the folder for the tenant, and then
expand the
Application Profiles folder.

Choosing Statistics
to View for Isolated Endpoints

If you configured
intra-EPG isolation on a Cisco AVS, you need to choose statistics—such as
denied connections, received packets, or transmitted multicast packets—for the
endpoints before you can view them.

Step 1

Log in to Cisco APIC, using Advanced or Basic mode.

Step 2

Choose
Tenants > tenant.

Step 3

In the tenant navigation pane, choose Application Profiles > profile > Application EPGs, and then choose the EPG containing the endpoint whose statistics you want to view.

Step 4

In the EPG
Properties work pane, click the
Operational tab to display the endpoints in the EPG.

Step 5

Double-click the
endpoint.

Step 6

In the
Properties dialog box for the endpoint, click the
Stats tab and then click the check icon.

Step 7

In the
Select
Stats dialog box, in the
Available pane, choose the statistics that you want
to view for the endpoint and then use the right-pointing arrow to move them
into the
Selected pane.

Step 8

Click
SUBMIT.

Viewing Statistics
for Isolated Endpoints

If you configured
intra-EPG isolation on a Cisco AVS, once you have chosen statistics for the
endpoints, you can view them.

Before You Begin

You must have chosen
statistics to view for isolated endpoints. See "Choosing Statistics to View for
Isolated Endpoints" in this guide for instructions.

Step 1

Log in to Cisco APIC, using Advanced or Basic mode.

Step 2

Choose
Tenants > tenant.

Step 3

In the tenant navigation pane, choose Application Profiles > profile > Application EPGs, and then choose the EPG containing the endpoint whose statistics you want to view.

Step 4

In the EPG
Properties work pane, click the
Operational tab to display the endpoints in the EPG.

Step 5

Double-click the
endpoint.

Step 6

In the
Properties dialog box for the endpoint, click the
Stats tab.

The central pane
displays the statistics that you chose earlier. You can change the view by
clicking the table view or chart view icon on the upper left side of the work
pane.