Prerequisites for Configuration

You must be in a user group associated with a task group that includes the proper task IDs. The command reference guides include the task IDs required for each command. If you suspect user group assignment is preventing you from using a command, contact your AAA administrator for assistance.

Before configuring the nV Edge system, you must have the required hardware and software installed in your chassis.

Overview of Cisco ASR 9000 nV Edge Architecture

A Cisco ASR 9000 Series nV Edge system consists of two or more Cisco ASR 9000 Series Router chassis that are combined to form a single logical switching or routing entity. You can operate two Cisco ASR 9000 Series Router platforms as a single virtual Cisco ASR 9000 Series system. Effectively, the nV Edge system logically links two physical chassis with a shared control plane, as if the chassis were two route switch processors (RSPs) within a single chassis. See Figure 1. The blue lines on top show the internal EOBC interconnection and the red lines at the bottom show the data plane interconnection.

As a result, you can double the bandwidth capacity of single nodes and eliminate the need for complex protocol-based high-availability schemes. Hence, you can achieve failover times of less than 50 milliseconds for even the most demanding services and scalability needs.

Figure 1 Cisco ASR 9000 nV Edge Architecture

Note In Cisco IOS XR Software Release 4.2.x, the scalability of nV Edge System is limited to two chassis.

As illustrated in Figure 2, the two physical chassis are linked using Layer 1 1-Gbps connections, with RSPs communicating over a Layer 1 or Layer 2 Ethernet out-of-band channel (EOBC) extension to create a single virtual control plane. Each RSP has two EOBC ports; with redundant RSPs, there are four control link connections between the chassis.

The Cisco Virtualized Network Architecture combines the nV Edge system with the satellite devices to offer the Satellite nV architecture. For more information on Satellite nV models, see Configuring the Satellite Network Virtualization (nV) System on the Cisco ASR 9000 Series Router chapter.

Inter Rack Links on Cisco ASR 9000 Series nV Edge System

The Inter Rack Link (IRL) connections carry forwarded traffic that enters on one chassis and exits through an interface on the other chassis of the nV Edge system. Each IRL must be a 10-GigE link with a direct Layer 1 connection. The IRLs are used for forwarding packets whose ingress and egress interfaces are on separate racks. There can be a maximum of 16 such links between the chassis. A minimum of two links is required, and they should be on two separate line cards for better resiliency in case one line card goes down due to a fault. See Cisco ASR 9000 nV Edge Architecture.
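For illustration, an IRL is enabled by applying the nv edge interface command (covered in the configuration steps later in this chapter) under a 10-Gigabit Ethernet interface on each chassis; the interface name below is a placeholder:

```
! On rack0 (a matching configuration is required on the corresponding
! 10-GigE interface of rack1)
RP/0/RSP0/CPU0:router(config)# interface TenGigE0/1/0/0
RP/0/RSP0/CPU0:router(config-if)# nv edge interface
RP/0/RSP0/CPU0:router(config-if)# commit
```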

Note For more information on QoS on IRLs, see Cisco ASR 9000 Series Aggregation Services Router Modular QoS Configuration Guide.

Failure Detection in Cisco ASR 9000 Series nV Edge System

In the Cisco ASR 9000 Series nV Edge system, when the Primary DSC node fails, the RSP in the Backup DSC node becomes Primary and takes over the duties of the master RSP that hosts the active set of control plane processes. In a normal nV Edge system scenario, where the Primary and Backup DSC nodes are hosted on separate racks, failure detection for the Primary DSC happens through communication between the racks.

These mechanisms are used to detect RSP failures across rack boundaries:

FPGA state information detected by the peer RSP in the same chassis is broadcast over the control links. This information is sent whenever a state change occurs, and periodically every 200 ms.

The UDLD state of the inter rack control and data links is sent to the remote rack, with failures detected at an interval of 500 ms.

A keep-alive message is sent between RSP cards through the inter rack control links, with a failure detection time of 10 seconds.

A Split Brain is a condition in which the inter rack links between the routers in a Cisco ASR 9000 Series nV Edge system fail, and the nodes on both routers start to act as the primary node. To avoid this condition, Split Brain detection messages are sent between the racks at 200-ms intervals across the inter-rack data links.

Scenarios for High Availability

These are some sample scenarios for failure detection:

1. Single RSP Failure in the Primary DSC node - The Standby RSP within the same chassis initially detects the failure through the backplane FPGA. On detecting the failure, this RSP transitions to the active state and notifies the Backup DSC node of the failure through inter-chassis control link messaging.

2. Failure of Primary DSC node and the Standby peer RSP - There are multiple cases where this scenario can occur, such as power-cycle of the Primary DSC rack or simultaneous soft reset of both RSP cards within the Primary rack.

a. The remote rack failure is initially detected by UDLD failure on the inter rack control link. The Backup DSC node checks the UDLD state on the inter rack data link. If the rack failure is confirmed by failure of the data link as well, then the Backup DSC node becomes active.

b. UDLD failure detection occurs every 500ms but the time between control link and data link failure can vary since these are independent failures detected by the RSP and line cards. A windowing period of up to 2 seconds is needed to correlate the control and data link failures and to allow split brain detection messages to be received. The keep-alive messaging between RSPs acts as a redundant detection mechanism, if the UDLD detection fails to detect a reset RSP card.

3. Failure of Inter Rack Control links (Split Brain) - This failure is initially detected by the UDLD protocol on the Inter Rack Control links. In this case, the Backup DSC continues to receive UDLD and keep-alive messages through the inter rack data link. As discussed in Scenario 2, a windowing period of two seconds is allowed to correlate the control and data link failures. If the data link has not failed, or Split Brain packets are received across the Management LAN, then the Backup DSC rack reloads to avoid the split brain condition.

Benefits of Cisco ASR 9000 Series nV Edge System

The Cisco ASR 9000 Series nV Edge system architecture offers these benefits:

1. The Cisco ASR 9000 Series nV Edge System appears as a single switch or router to the neighboring devices.

2. You can logically link two physical chassis with a shared control plane, as if the chassis were two route switch processors (RSPs) within a single chassis. As a result, you can double the bandwidth capacity of single nodes and eliminate the need for complex protocol-based high-availability schemes.

3. You can achieve failover times of less than 50 milliseconds for even the most demanding services and scalability needs.

4. You can manage the cluster as a single entity rather than two entities. Better resiliency is available because the chassis protect one another.

5. Cisco nV technology allows you to extend Cisco ASR 9000 Series Router system capabilities beyond the physical chassis with remote virtual line cards. These small form-factor (SFF) Cisco ASR 9000v cards can aggregate hundreds of Gigabit Ethernet connections at the access and aggregation layers.

6. You can scale up to thousands of Gigabit Ethernet interfaces without having to separately provision hundreds or thousands of access platforms. This helps you to simplify the network architecture and reduce the operating expenses (OpEx).

7. The multi-chassis capabilities of Cisco IOS XR Software are employed. These capabilities are extended to allow for enhanced chassis resiliency including data plane, control plane, and management plane protection in case of complete failure of any chassis in the Cisco ASR 9000 Series nV Edge System.

8. You can reduce the number of pseudowires required for achieving pseudowire redundancy.

9. The nV Edge system allows seamless addition of new chassis. There is no traffic disruption or control session flap when a chassis is added to the system.

Restrictions of the Cisco ASR 9000 Series nV Edge System

These are some of the restrictions for the Cisco ASR 9000 nV Edge system:

The Cisco ASR 9000 Ethernet linecards and Cisco A9K-SIP-700 line cards are not supported.

Chassis types that are not similar cannot be connected to form an nV edge system.

SFP-GE-S and GLC-SX-MMD are the only Cisco supported SFPs that are allowed for all inter rack connections.

TenGigE SFPs are not supported on EOBC ports.

The nV Edge control plane links have to be direct physical connections and no network or intermediate routing or switching devices are allowed in between.

The nV Edge system does not support mixed speed links.

The nV Edge system does not support the ISM or CGN blade.

Restrictions of the Cisco ASR 9001 Series nV Edge System

Note The restrictions of the Cisco ASR 9000 Series nV Edge System also apply to Cisco ASR 9001 Series nV Edge System.

Split Brain detection and recovery over management links are not supported. A split brain condition occurs when all the internal Ethernet out-of-band channel (EOBC) extension and data plane extension connections between the two chassis fail.

SNMP query or notifications for EOBC links are not supported.

These commands are not supported when the Cisco ASR 9001 Series Router is used as the nV Edge System:

– admin nv edge data allowunsup

– admin nv edge data slowstart

– admin nv edge data stopudld

– admin nv edge data udldpriority

– admin nv edge data udldttltomsg

If the primary rack goes down in a Cisco ASR 9001 Series nV Edge System, there might be traffic loss of nearly a minute before the secondary rack becomes the primary rack.

Implementing a Cisco ASR 9000 Series nV Edge System

This section explains the implementation of Cisco ASR 9000 Series nV Edge System.

Configuring Cisco ASR 9000 nV Edge System

To bring up the Cisco ASR 9000 nV cluster, perform the steps outlined in the following subsection.

Single Chassis to Cluster Migration

Consider two individual chassis running a Cisco IOS XR Software Release 4.2.x image; these steps refer to them as rack0 and rack1. If the chassis are already running Cisco IOS XR Software Release 4.2.1 or later, you can skip the first two steps.

1. You must turbo boot each chassis independently with the Cisco IOS XR Software Release 4.2.1.

2. Upgrade the field-programmable devices (FPDs). This step is required because the Cisco ASR 9000 Series nV Edge requires, at a minimum, RSP ROMMON versions that correspond to Cisco IOS XR Software Release 4.2.1.

3. Collect information: You need the chassis serial number of each rack that is to be added to the cluster. On a running system, you can get this from the show inventory chassis command. On a system at ROMMON, you can get the serial number from the bpcookie.
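As an illustrative sketch, the collected serial numbers are typically associated with rack numbers from administration configuration mode; the serial numbers below are placeholders, and the exact command syntax should be confirmed against the command reference for your release:

```
! Placeholder serial numbers shown; substitute the values collected above
RP/0/RSP0/CPU0:router(admin-config)# nv edge control serial FOX1234ABCD rack 0
RP/0/RSP0/CPU0:router(admin-config)# nv edge control serial FOX5678EFGH rack 1
RP/0/RSP0/CPU0:router(admin-config)# commit
```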

6. Boot the RSPs (if you have two) in Rack 1 into ROMMON mode. Change the ROMMON variables using these commands:

unset CLUSTER_RACK_ID
unset CLUSTER_NO_BOOT
unset BOOT
sync

7. Power down Rack 1.

8. Physically connect the routers. Connect the inter chassis control links on the front panel of the RSP cards (labeled SFP+ 0 and SFP+ 1) together: Rack0-RSP0 connects to Rack1-RSP0, and similarly for RSP1. Once Rack 1 is up, you can verify the connections using the show nv edge control control-link-protocols location <node-id> command.
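For example, after Rack 1 is up, the control link state can be checked for a specific node (the location shown is illustrative):

```
RP/0/RSP0/CPU0:router# show nv edge control control-link-protocols location 0/RSP0/CPU0
```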

Note You do not need any explicit command for inter-chassis control links and it is on by default.

9. Bring up Rack 1.

10. You must also connect the inter-chassis data links. Configure each link as an inter-chassis data link interface using the nv edge interface configuration command under the 10 Gigabit Ethernet interface (10 GigE only). Ensure that this configuration is applied on both sides of the inter-chassis data link (on rack0 and rack1).

If Bundle-ether is used as the interface, then

– You must include lacp system mac h.h.h in the global configuration mode.

– You must configure mac-addr h.h.h on the Bundle-ether interface.

Note A static MAC address on the bundle is necessary whether the Bundle Ethernet members are on the same chassis or on different chassis.

Note You can verify the Interchassis Data Link operation using the show nv edge data forwarding command.
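Putting the bundle-related notes together, a minimal configuration sketch might look like the following; the interface numbers and MAC addresses are illustrative placeholders, and the placement of nv edge interface on the bundle (rather than on its members) is an assumption to be checked against the command reference:

```
RP/0/RSP0/CPU0:router(config)# lacp system mac 0201.abcd.0000
RP/0/RSP0/CPU0:router(config)# interface Bundle-Ether1
RP/0/RSP0/CPU0:router(config-if)# mac-addr 0201.abcd.0001
RP/0/RSP0/CPU0:router(config-if)# nv edge interface
RP/0/RSP0/CPU0:router(config-if)# exit
RP/0/RSP0/CPU0:router(config)# interface TenGigE0/1/0/1
RP/0/RSP0/CPU0:router(config-if)# bundle id 1 mode active
RP/0/RSP0/CPU0:router(config-if)# commit
```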

11. After Rack0 and Rack1 come up fully with all the RSPs and line cards in XR-RUN state, the show dsc and show redundancy summary command outputs should be similar to those shown in the nV Edge System Configuration: Example section.


RFCs

RFCs: None
Title: N.A.

Technical Assistance

Description: The Cisco Technical Support website contains thousands of pages of searchable technical content, including links to products, technologies, solutions, technical tips, and tools. Registered Cisco.com users can log in from this page to access even more content.