Implementing MPLS Traffic Engineering

Multiprotocol Label
Switching (MPLS) is a standards-based solution driven by the Internet
Engineering Task Force (IETF) that was devised to convert the Internet and IP
backbones from best-effort networks into business-class transport mediums.

MPLS, with its label
switching capabilities, eliminates the need for an IP route look-up and creates
a virtual circuit (VC) switching function, allowing enterprises the same
performance on their IP-based network services as with those delivered over
traditional networks such as Frame Relay or Asynchronous Transfer Mode (ATM).

MPLS traffic
engineering (MPLS-TE) software enables an MPLS backbone to replicate and expand
upon the TE capabilities of Layer 2 ATM and Frame Relay networks. MPLS is an
integration of Layer 2 and Layer 3 technologies. By making traditional Layer 2
features available to Layer 3, MPLS enables traffic engineering. Thus, you can
offer in a one-tier network what now can be achieved only by overlaying a Layer
3 network on a Layer 2 network.

Prerequisites for Implementing Cisco MPLS Traffic Engineering

These prerequisites are required to implement MPLS TE:

You must be in a user group associated with a task group that
includes the proper task IDs. The command reference guides include the task IDs
required for each command. If you suspect user group assignment is preventing
you from using a command, contact your AAA administrator for assistance.

Router that runs
Cisco IOS XR software.

Installed composite mini-image and the MPLS package, or a full
composite image.

IGP activated.

To configure Point-to-Multipoint (P2MP)-TE, a base set of RSVP and TE configuration parameters on ingress, midpoint, and egress nodes in the MPLS network is required. In addition, Point-to-Point (P2P) parameters are required.

Enable LDP globally by using the mpls ldp command to allocate local labels, even in an RSVP-only (MPLS-TE) core. You do not have to specify any interface if the core is LDP-free.
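As an illustration only, a minimal baseline that satisfies these prerequisites might look like the following sketch. The IS-IS instance name, interface names, and bandwidth values are placeholders, not taken from this document:

```
! IGP with TE extensions activated (IS-IS shown; OSPF is equally valid)
router isis 1
 address-family ipv4 unicast
  metric-style wide
  mpls traffic-eng level-2-only
  mpls traffic-eng router-id Loopback0
!
! RSVP reservable bandwidth on the core-facing link
rsvp
 interface TenGigE0/0/0/0
  bandwidth 100000
!
! Enable MPLS-TE on the same link
mpls traffic-eng
 interface TenGigE0/0/0/0
!
! Allocate local labels even in an RSVP-only core
mpls ldp
```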

Overview of MPLS Traffic Engineering

MPLS-TE software enables an MPLS backbone to replicate and expand upon the traffic engineering capabilities of Layer 2 ATM and Frame Relay networks. MPLS is an integration of Layer 2 and Layer 3 technologies. By making traditional Layer 2 features available to Layer 3, MPLS enables traffic engineering. Thus, you can offer in a one-tier network what now can be achieved only by overlaying a Layer 3 network on a Layer 2 network.

MPLS-TE is essential for service provider and Internet service provider (ISP) backbones. Such backbones must support a high use of transmission capacity, and the networks must be very resilient so that they can withstand link or node failures. MPLS-TE provides an integrated approach to traffic engineering. With MPLS, traffic engineering capabilities are integrated into Layer 3, which optimizes the routing of IP traffic, given the constraints imposed by backbone capacity and topology.

Benefits of MPLS Traffic Engineering

MPLS-TE enables ISPs to route network traffic to offer the best service to their users in terms of throughput and delay. By making the service provider more efficient, traffic engineering reduces the cost of the network.

Currently, some ISPs base their services on an overlay model. In the overlay model, transmission facilities are managed by Layer 2 switching. The routers see only a fully meshed virtual topology, making most destinations appear one hop away. If you use the explicit Layer 2 transit layer, you can precisely control how traffic uses available bandwidth. However, the overlay model has numerous disadvantages. MPLS-TE achieves the TE benefits of the overlay model without running a separate network and without a non-scalable, full mesh of router interconnects.

How MPLS-TE Works

MPLS-TE automatically establishes and maintains label switched paths
(LSPs) across the backbone by using RSVP. The
path that an LSP uses is determined by the LSP resource requirements and
network resources, such as bandwidth. Available resources are flooded by means
of extensions to a link-state-based Interior Gateway Protocol (IGP).

MPLS-TE tunnels are calculated at the LSP headend router, based on a fit
between the required and available resources (constraint-based routing). The
IGP automatically routes the traffic to these LSPs.

Typically, a packet crossing the MPLS-TE backbone travels on a single
LSP that connects the ingress point to the egress point. MPLS-TE is built on
these mechanisms:

Tunnel interfaces

From a Layer 2 standpoint, an MPLS tunnel interface represents the
headend of an LSP. It is configured with a set of resource requirements, such
as bandwidth and media requirements, and priority. From a Layer 3 standpoint,
an LSP tunnel interface is the headend of a unidirectional virtual link to the
tunnel destination.

MPLS-TE path calculation module

This calculation module operates at the LSP headend. The module
determines a path to use for an LSP. The path calculation uses a link-state
database containing flooded topology and resource information.

RSVP with TE extensions

RSVP operates at each LSP hop and is used to signal and maintain
LSPs based on the calculated path.

MPLS-TE link management module

This module operates at each LSP hop, performs link call admission
on the RSVP signaling messages, and performs bookkeeping on topology and
resource information to be flooded.

Link-state IGP (Intermediate System-to-Intermediate System [IS-IS]
or Open Shortest Path First [OSPF]—each with traffic engineering extensions)

These IGPs are used to globally flood topology and resource
information from the link management module.

Enhancements to the shortest path first (SPF) calculation used by
the link-state IGP (IS-IS or OSPF)

The IGP automatically routes traffic to the appropriate LSP
tunnel, based on tunnel destination. Static routes can also be used to direct
traffic to LSP tunnels.

Label switching forwarding

This forwarding mechanism provides routers with a Layer 2-like
ability to direct traffic across multiple hops of the LSP established by RSVP
signaling.

One approach to engineering a backbone is to define a mesh of tunnels
from every ingress device to every egress device. The MPLS-TE path calculation
and signaling modules determine the path taken by the LSPs for these tunnels,
subject to resource availability and the dynamic state of the network.

The IGP (operating at an ingress device) determines which traffic should
go to which egress device, and steers that traffic into the tunnel from ingress
to egress. A flow from an ingress device to an egress device might be so large
that it cannot fit over a single link, so it cannot be carried by a single
tunnel. In this case, multiple tunnels between a given ingress and egress can
be configured, and the flow is distributed using load sharing among the
tunnels.
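A hedged sketch of a single headend tunnel built on the mechanisms above follows; the destination address and bandwidth are illustrative placeholders:

```
interface tunnel-te1
 ipv4 unnumbered Loopback0
 destination 192.168.0.4     ! tailend router ID (example value)
 signalled-bandwidth 5000    ! resource requirement signaled by RSVP, in kbps
 priority 2 2                ! setup and hold priority
 autoroute announce          ! let the IGP steer traffic into the tunnel
 path-option 10 dynamic      ! headend computes the path (constraint-based routing)
```

For the multiple-tunnel case described above, configuring several tunnel-te interfaces to the same destination causes the flow to be distributed among them by load sharing.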

Backup AutoTunnels

The MPLS Traffic Engineering AutoTunnel Backup feature enables a router to dynamically build backup tunnels, when they are needed, on the interfaces that are configured with MPLS TE tunnels. This eliminates the need to build MPLS TE backup tunnels statically.

The MPLS Traffic Engineering (TE)—AutoTunnel Backup feature has these benefits:

Backup tunnels are built automatically, eliminating the need for users to preconfigure each backup tunnel and then assign the backup tunnel to the protected interface.

Protection is expanded—FRR does not protect IP traffic that is not using the TE tunnel or Label Distribution Protocol (LDP) labels that are not using the TE tunnel.
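A minimal sketch of enabling this feature might look like the following; the tunnel ID range and interface name are assumed values:

```
mpls traffic-eng
 auto-tunnel backup
  tunnel-id min 6000 max 6500   ! ID pool reserved for dynamically built backups
 interface TenGigE0/0/0/0
  auto-tunnel backup            ! build NHOP/NNHOP backups for this link as needed
```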

Link Protection

The backup tunnels that bypass only a single link of the LSP path provide link protection. They protect LSPs, if a link along their path fails, by rerouting the LSP traffic to the next hop, thereby bypassing the failed link. These are referred to as NHOP backup tunnels because they terminate at the LSP's next hop beyond the point of failure.

This figure illustrates link protection.

Figure 1. Link Protection

Node Protection

The backup tunnels that bypass next-hop nodes along LSP paths are called NNHOP backup tunnels because they terminate at the node following the next-hop node of the LSPs, thereby bypassing the next-hop node. They protect LSPs by enabling the node upstream of a link or node failure to reroute the LSPs and their traffic around a node failure to the next-hop node. NNHOP backup tunnels also provide protection from link failures because they bypass the failed link and the node.

This figure illustrates node protection.

Figure 2. Node Protection

Backup AutoTunnel Assignment

At the head or mid points of a tunnel, the backup assignment finds an appropriate backup to protect a given primary tunnel for FRR protection.

The backup assignment logic is performed differently based on the type of backup configured on the output interface used by the primary tunnel. Configured backup types are:

Static Backup

AutoTunnel Backup

No Backup (In this case, no backup assignment is performed and the tunnel is unprotected.)

Note

Static backup and Backup AutoTunnel cannot exist together on the same interface or link.

Note

Node protection is always preferred over link protection in the Backup AutoTunnel assignment.

For the Backup AutoTunnel feature to operate successfully, the following configuration must be applied at the global configuration level:

ipv4 unnumbered mpls traffic-eng Loopback 0

Note

Loopback 0 is used as the router ID.

Explicit Paths

Explicit paths are used to create backup autotunnels as follows:

For NHOP Backup Autotunnels:

NHOP excludes the protected link's local IP address.

NHOP excludes the protected link’s remote IP address.

The explicit-path name is _autob_nhop_tunnelxxx, where xxx matches the dynamically created backup tunnel ID.

For NNHOP Backup Autotunnels:

NNHOP excludes the NHOP router ID of the protected primary tunnel's next hop.

The explicit-path name is _autob_nnhop_tunnelxxx, where xxx matches the dynamically created backup tunnel ID.

Periodic Backup Promotion

The periodic backup promotion attempts to find and assign a better backup for primary tunnels that are already protected.

With AutoTunnel Backup, the only scenario where two backups can protect the same primary tunnel is when both an NHOP and NNHOP AutoTunnel Backups get created. The backup assignment takes place as soon as the NHOP and NNHOP backup tunnels come up. So, there is no need to wait for the periodic promotion.

Hence, although AutoTunnel Backups are not excluded from periodic promotion, the promotion has no practical impact on primary tunnels that are already protected by AutoTunnel Backup.

One exception is when a manual promotion is triggered by the user using the mpls traffic-eng fast-reroute timers promotion command, where backup assignment or promotion is triggered on all FRR-enabled primary tunnels, even unprotected ones. This may trigger the immediate creation of an AutoTunnel Backup, if the command is entered within the time window when a required AutoTunnel Backup has not yet been created.

You can configure the periodic promotion timer using the global configuration mpls traffic-eng fast-reroute timers promotion sec command. The range is 0 to 604800 seconds.

Note

A value of 0 for the periodic promotion timer disables the periodic promotion.

Protocol-Based CLI

Cisco IOS XR software
provides a protocol-based command line interface. The CLI provides commands
that can be used with the multiple IGP protocols supported by MPLS-TE.

Differentiated Services Traffic Engineering

MPLS Differentiated Services (Diff-Serv) Aware Traffic Engineering
(DS-TE) is an extension of the regular MPLS-TE feature. Regular traffic
engineering does not provide bandwidth guarantees to different traffic classes.
A single bandwidth constraint is used in regular TE that is shared by all
traffic. To support various classes of service (CoS), users can configure
multiple bandwidth constraints. These bandwidth constraints can be treated
differently based on the requirement for the traffic class using that
constraint.

MPLS DS-TE provides the ability to configure multiple bandwidth
constraints on an MPLS-enabled interface. Available bandwidths from all
configured bandwidth constraints are advertised using IGP. TE tunnel is
configured with bandwidth value and class-type requirements. Path calculation
and admission control take the bandwidth and class-type into consideration.
RSVP is used to signal the TE tunnel with bandwidth and class-type
requirements.

Prestandard DS-TE Mode

Prestandard DS-TE uses the Cisco proprietary mechanisms for RSVP signaling and IGP advertisements. This DS-TE mode does not interoperate with third-party vendor equipment. Note that prestandard DS-TE is enabled only after configuring the sub-pool bandwidth values on MPLS-enabled interfaces.
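As a hedged example, prestandard DS-TE is typically enabled by configuring sub-pool bandwidth under RSVP on each MPLS-enabled interface; the syntax and values shown here are assumptions for illustration:

```
rsvp
 interface TenGigE0/0/0/0
  ! total reservable bandwidth, with a portion reserved as the sub-pool
  bandwidth 100000 sub-pool 30000
```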

IETF DS-TE Mode

IETF mode supports multiple bandwidth constraint models, including RDM and MAM, both with two bandwidth pools. In an IETF DS-TE network, identical bandwidth constraint models must be configured on all nodes.

TE class map is used with IETF DS-TE mode and must be configured the same way on all nodes in the network.
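A sketch of enabling IETF DS-TE with the MAM model might look like this (bandwidth values are placeholders):

```
mpls traffic-eng
 ds-te mode ietf          ! switch from prestandard to IETF DS-TE
 ds-te bc-model mam       ! select MAM; RDM is the default
!
rsvp
 interface TenGigE0/0/0/0
  bandwidth mam max-reservable-bw 100000 bc0 100000 bc1 30000
```

Remember that the identical bandwidth constraint model must be configured on all nodes of the DS-TE network.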

Bandwidth Constraint Models

IETF DS-TE mode provides support for the RDM and MAM bandwidth constraints models. Both models support up to two bandwidth pools.

Cisco IOS XR software provides global configuration for the switching between bandwidth constraint models. Both models can be configured on a single interface to preconfigure the bandwidth constraints before swapping to an alternate bandwidth constraint model.

Note

NSF is not guaranteed when you change the bandwidth constraint model or configuration information.

RDM is the default bandwidth constraint model in both prestandard and IETF mode.

Russian Doll Bandwidth Constraint Model

Ensures bandwidth efficiency and protection against QoS degradation of all class types simultaneously.

Specifies that RDM is used in conjunction with preemption to simultaneously achieve isolation across class types (so that each class type is guaranteed its share of bandwidth), bandwidth efficiency, and protection against QoS degradation of all class types.

Note

We recommend that RDM not be used in DS-TE environments in which the use of preemption is precluded. Although RDM ensures bandwidth efficiency and protection against QoS degradation of class types, it cannot guarantee isolation across class types without preemption.

TE Class Mapping

Each of the eight available bandwidth values advertised in the IGP
corresponds to a TE class. Because the IGP advertises only eight bandwidth
values, there can be a maximum of only eight TE classes supported in an IETF
DS-TE network.

TE class mapping must be exactly the same on all routers in a DS-TE domain. It is the responsibility of the operator to configure these settings properly, as there is no way to automatically check or enforce consistency.

The operator must configure TE tunnel class types and priority levels to form a valid TE class. When the TE class map configuration is changed, tunnels that are already up are brought down. Tunnels in the down state can be set up again if a valid TE class map is found.

The default TE class and attributes are listed. The default mapping
includes four class types.

Table 1. TE Classes and Priority

TE Class    Class Type    Priority
0           0             7
1           1             7
2           Unused        —
3           Unused        —
4           0             0
5           1             0
6           Unused        —
7           Unused        —
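The default mapping could be expressed explicitly with a TE class map configuration along these lines (a sketch; the te-class syntax shown is assumed to follow the ds-te te-classes submode):

```
mpls traffic-eng
 ds-te te-classes
  te-class 0 class-type 0 priority 7
  te-class 1 class-type 1 priority 7
  te-class 2 unused
  te-class 3 unused
  te-class 4 class-type 0 priority 0
  te-class 5 class-type 1 priority 0
  te-class 6 unused
  te-class 7 unused
```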

Flooding

Available bandwidth from all configured bandwidth pools is flooded on the network so that accurate constraint paths can be calculated when a new TE tunnel is configured. Flooding uses IGP protocol extensions and mechanisms to determine when to flood the network with bandwidth information.

Flooding Triggers

TE Link Management (TE-Link) notifies IGP for both global pool and sub-pool available bandwidth and maximum bandwidth to flood the network in these events:

Periodic timer expires (this does not depend on bandwidth pool type).

Tunnel origination node has out-of-date information for either available global pool or sub-pool bandwidth, causing tunnel admission failure at the midpoint.

Consumed bandwidth crosses user-configured thresholds. The same threshold is used for both global pool and sub-pool. If one bandwidth crosses the threshold, both bandwidths are flooded.

Flooding Thresholds

Flooding frequently can burden a network because all routers must send out and process these updates. Infrequent flooding causes tunnel heads (tunnel-originating nodes) to have out-of-date information, causing tunnel admission to fail at the midpoints.

You can control the frequency of flooding by configuring a set of thresholds. When locked bandwidth (at one or more priority levels) crosses one of these thresholds, flooding is triggered.

Thresholds apply to a percentage of the maximum available bandwidth (the global pool), which is locked, and the percentage of maximum available guaranteed bandwidth (the sub-pool), which is locked. If, for one or more priority levels, either of these percentages crosses a threshold, flooding is triggered.

Note

Setting up a global pool TE tunnel can cause the locked bandwidth allocated to sub-pool tunnels to be reduced (and hence to cross a threshold). A sub-pool TE tunnel setup can similarly cause the locked bandwidth for global pool TE tunnels to cross a threshold. Thus, sub-pool TE and global pool TE tunnels can affect each other when flooding is triggered by thresholds.

Fast Reroute

Fast Reroute (FRR) provides link protection to LSPs enabling the traffic carried by LSPs that encounter a failed link to be rerouted around the failure. The reroute decision is controlled locally by the router connected to the failed link. The headend router on the tunnel is notified of the link failure through IGP or through RSVP. When it is notified of a link failure, the headend router attempts to establish a new LSP that bypasses the failure. This provides a path to reestablish links that fail, providing protection to data transfer.

FRR (link or node) is supported over sub-pool tunnels the same way as for regular TE tunnels. In particular, when link protection is activated for a given link, TE tunnels eligible for FRR are redirected into the protection LSP, regardless of whether they are sub-pool or global pool tunnels.

Note

The ability to configure FRR on a per-LSP basis makes it possible to provide different levels of fast restoration to tunnels from different bandwidth pools.

You should be aware of these requirements for the backup tunnel path:

Backup tunnel must not pass through the element it protects.

Primary tunnel and a backup tunnel should intersect at least at two points (nodes) on the path: point of local repair (PLR) and merge point (MP). PLR is the headend of the backup tunnel, and MP is the tailend of the backup tunnel.

Note

When you configure a TE tunnel with multiple protections on its path and the merge point is the same node for more than one protection, you must configure record-route for that tunnel.
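A hedged sketch of FRR with a statically configured NHOP backup follows; the destination address, explicit-path name, and tunnel IDs are placeholders:

```
! Headend: request FRR protection for the primary tunnel
interface tunnel-te1
 fast-reroute
!
! PLR: pre-signal a backup LSP that avoids the protected link
interface tunnel-te100
 ipv4 unnumbered Loopback0
 destination 192.168.0.3              ! merge point: next hop beyond the failure
 path-option 10 explicit name AVOID-LINK
!
! Assign the backup to the protected interface
mpls traffic-eng
 interface TenGigE0/0/0/0
  backup-path tunnel-te 100
```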

IS-IS IP Fast Reroute Loop-free Alternative

For bandwidth protection, there must be sufficient backup bandwidth
available to carry primary tunnel traffic. Use the
ipfrr lfa command to compute loop-free alternates for all links or
neighbors in the event of a link or node failure. To enable node protection on
broadcast links, IPFRR and bidirectional forwarding detection (BFD) must be
enabled on the interface under IS-IS.

Note

MPLS FRR and IPFRR cannot be configured on the same interface at the
same time.

For information about configuring BFD, see
Cisco IOS XR Interface and Hardware Configuration Guide for the
Cisco CRS-1 Router.

MPLS-TE and Fast Reroute over Link Bundles

MPLS Traffic Engineering (TE) and Fast Reroute (FRR) are supported
over bundle interfaces. MPLS-TE/FRR over virtual local area network (VLAN) interfaces is supported. Bidirectional forwarding detection (BFD) over VLAN is used as an FRR trigger to obtain less than 50 milliseconds of switchover time.

Ignore IS-IS Overload Bit Setting in MPLS-TE

The Ignore Intermediate System-to-Intermediate System (IS-IS) overload bit avoidance feature allows network administrators to prevent RSVP-TE label switched paths (LSPs) from being disabled when a router in that path has its IS-IS overload bit set.

The IS-IS overload bit avoidance feature is activated using this command:

mpls traffic-eng path-selection ignore overload

The IS-IS overload bit avoidance feature is deactivated using the no form of this command:

no mpls traffic-eng path-selection ignore overload

When the IS-IS overload bit avoidance feature is activated, the overload bit is ignored on all nodes (head, mid, and tail), so those nodes remain available for use with RSVP-TE label switched paths (LSPs). This feature enables you to include an overloaded node in CSPF.

Enhancement Options of IS-IS OLA

You can restrict configuring IS-IS overload bit avoidance with the following enhancement options:

path-selection ignore overload head
The tunnels stay up if set-overload-bit is set by IS-IS on the head router. Ignores overload during CSPF for LSPs originating from an overloaded node. In all other cases (mid, tail, or both), the tunnel stays down.

path-selection ignore overload mid
The tunnels stay up if set-overload-bit is set by IS-IS on the mid router. Ignores overload during CSPF for LSPs transiting from an overloaded node. In all other cases (head, tail, or both), the tunnel stays down.

path-selection ignore overload tail
The tunnels stay up if set-overload-bit is set by IS-IS on the tail router. Ignores overload during CSPF for LSPs terminating at an overloaded node. In all other cases (head, mid, or both), the tunnel stays down.

path-selection ignore overload
The tunnels stay up irrespective of the router (head, mid, or tail) on which the set-overload-bit is set by IS-IS.

Note

When you do not select any of the options (head, mid, or tail), the behavior applies to all nodes. This behavior is backward compatible.

GMPLS Benefits

GMPLS bridges the IP and photonic layers, thereby making possible interoperable and scalable parallel growth in the IP and photonic dimensions.

This allows for rapid service deployment and operational efficiencies, as well as for increased revenue opportunities. A smooth transition becomes possible from a traditional segregated transport and service overlay model to a more unified peer model.

By streamlining support for multiplexing and switching in a hierarchical fashion, and by utilizing the flexible intelligence of MPLS-TE for optical switching, GMPLS becomes very helpful for service providers that want to manage large volumes of traffic in a cost-efficient manner.

GMPLS Protection and Restoration

GMPLS provides protection against failed channels (or links) between two adjacent nodes (span protection) and end-to-end dedicated protection (path protection). After the route is computed, signaling to establish the backup paths is carried out through RSVP-TE or CR-LDP. For span protection, 1+1 or M:N protection schemes are provided by establishing secondary paths through the network. In addition, you can use signaling messages to switch from the failed primary path to the secondary path.

Note

Only 1:1 end-to-end path protection is supported.

The restoration of a failed path refers to the dynamic establishment of a backup path. This process requires the dynamic allocation of resources and route calculation. The following restoration methods are described:

Line restoration—Finds an alternate route at an intermediate node.

Path restoration—Initiates at the source node to route around a failed segment within the path of a specific LSP.

Restoration schemes provide better bandwidth usage, because they do not preallocate any resources for an LSP.

GMPLS combines MPLS-FRR and other types of protection, such as SONET/SDH and wavelength.

In addition to SONET alarms in POS links, protection and restoration is also triggered by bidirectional forwarding detection (BFD).

1:1 LSP Protection

A 1:1 protection scheme is one in which one specific protecting LSP or span protects one specific working LSP or span. However, normal traffic is transmitted over only one LSP (working or recovery) at a time.

1:1 protection with extra traffic refers to the scheme in which extra
traffic is carried over a protecting LSP when the protecting LSP is not being
used for the recovery of normal traffic. For example, the protecting LSP is in
standby mode. When the protecting LSP is required to recover normal traffic
from the failed working LSP, the extra traffic is preempted. Extra traffic is
not protected, but it can be restored. Extra traffic is transported using the
protected LSP resources.

Shared Mesh Restoration and M:N Path Protection

Both shared mesh restoration and M:N (1:N is more practical) path protection offer sharing of protection resources among multiple working LSPs. For 1:N protection, a specific protecting LSP is dedicated to the protection of up to N working LSPs and spans. Shared mesh is defined as preplanned LSP rerouting, which reduces the restoration resource requirements by allowing multiple restoration LSPs, initiated from distinct ingress nodes, to share common resources such as links and nodes.

End-to-end Recovery

End-to-end recovery refers to the recovery of an entire LSP, from its source (the ingress router endpoint) to its destination (the egress router endpoint).

GMPLS Protection Requirements

The GMPLS protection requirements are specific to the protection scheme that is enabled at the data plane. For example, SONET APS or MPLS-FRR is identified as the data-plane level for GMPLS protection.

GMPLS Prerequisites

The following prerequisites are required to implement GMPLS on
Cisco IOS XR software:

You must be in a user group associated with a task group that
includes the proper task IDs for GMPLS commands.

Flexible Name-based Tunnel Constraints

In the traditional TE scheme, links are configured with attribute-flags that are flooded with TE link-state parameters using Interior Gateway Protocols (IGPs), such as Open Shortest Path First (OSPF).

MPLS-TE Flexible Name-based Tunnel Constraints lets you assign, or map, up to 32 color names for affinity and attribute-flag attributes instead of 32-bit hexadecimal numbers. After mappings are defined, the attributes can be referred to by the corresponding color name in the command-line interface (CLI). Furthermore, you can define constraints using include, include-strict, exclude, and exclude-all arguments, where each statement can contain up to 10 colors, and define include constraints in both loose and strict sense.

Note

You can configure affinity constraints using attribute flags or the Flexible Name Based Tunnel Constraints scheme; however, when configurations for both schemes exist, only the configuration pertaining to the new scheme is applied.
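A sketch of the name-based scheme might look like the following; the color names, bit positions, and interface are illustrative placeholders:

```
mpls traffic-eng
 affinity-map RED bit-position 0     ! map a color name to an attribute-flag bit
 affinity-map BLUE bit-position 1
 interface TenGigE0/0/0/0
  attribute-names RED                ! color the link instead of using hex flags
!
interface tunnel-te1
 affinity include RED BLUE           ! include constraint in the loose sense
```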

Interarea Support

Interarea support allows the configuration of a TE LSP that spans multiple areas, where its headend and tailend label switched routers (LSRs) reside in different IGP areas.

Multiarea and interarea TE are required by customers running multiple-IGP-area backbones (primarily for scalability reasons). This lets you limit the amount of flooded information, reduce the SPF duration, and lessen the impact of a link or node failure within an area, particularly with large WAN backbones split into multiple areas.

Multiarea Support

Multiarea support allows an area border router (ABR) LSR to support MPLS-TE in more than one
IGP area. A TE LSP is still confined to a single area.

Multiarea and Interarea TE are required when you run multiple IGP area
backbones. The Multiarea and Interarea TE allows you to:

Limit the volume of flooded information.

Reduce the SPF duration.

Decrease the impact of a link or node failure within an area.

Figure 4. Interlevel (IS-IS) TE Network

As shown in
the figure,
R2, R3, R7, and R4 maintain two databases for routing and TE information. For
example, R3 has TE topology information related to R2, flooded through Level-1
IS-IS LSPs plus the TE topology information related to R4, R9, and R7, flooded
as Level 2 IS-IS Link State PDUs (LSPs) (plus, its own IS-IS LSP).

Note

You can configure multiple areas within an IS-IS Level 1. This is
transparent to TE. TE has topology information about the IS-IS level, but not
the area ID.

Loose Hop Expansion

Loose hop optimization allows the reoptimization of tunnels spanning multiple areas and solves the problem that occurs when an MPLS-TE LSP traverses hops that are not in the LSP headend's OSPF area or IS-IS level.

Interarea MPLS-TE allows you to configure an interarea traffic engineering (TE) label switched path (LSP) by specifying a loose source route of ABRs along the path. It is then the responsibility of the ABR (which has a complete view of both areas) to find a path obeying the TE LSP constraints within the next area to reach the next-hop ABR (as specified on the headend). The same operation is performed by the last ABR connected to the tailend area to reach the tailend LSR.

You must be aware of these considerations when using loose hop optimization:

You must specify the router ID of the ABR node (as opposed to a link address on the ABR).

When multiarea is deployed in a network that contains subareas, you must enable MPLS-TE in the subarea for TE to find a path when loose hop is specified.

You must specify the reachable explicit path for the interarea tunnel.
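The loose source route itself is configured as an explicit path; a sketch, with placeholder ABR router IDs:

```
explicit-path name ABR-PATH
 index 10 next-address loose ipv4 unicast 192.168.0.2   ! router ID of the first ABR
 index 20 next-address loose ipv4 unicast 192.168.0.3   ! router ID of the next ABR
!
interface tunnel-te1
 path-option 10 explicit name ABR-PATH
```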

Loose Hop Reoptimization

Loose hop reoptimization allows the reoptimization of the tunnels spanning multiple areas and solves the problem which occurs when an MPLS-TE headend does not have visibility into other IGP areas.

Whenever the headend attempts to reoptimize a tunnel, it tries to find a better path to the ABR in the headend area. If a better path is found, the headend initiates the setup of a new LSP. If a suitable path is not found in the headend area, the headend initiates a querying message. The purpose of this message is to query the ABRs in the areas other than the headend area, to check whether a better path exists in those areas. If a better path does not exist, the ABR forwards the query to the next router downstream. Alternatively, if a better path is found, the ABR responds with a special Path Error to the headend to indicate the existence of a better path outside the headend area. Upon receiving this Path Error, the headend router initiates the reoptimization.

ABR Node Protection

Because one IGP area does not have visibility into another IGP area, it is not possible to directly assign a backup tunnel to protect an ABR node. To overcome this problem, a node ID sub-object is added to the record route object of the primary tunnel, so that at a PLR node the backup destination address can be checked against the primary tunnel's record route object and a backup tunnel can be assigned.

Fast Reroute Node Protection

If a link failure occurs within an area, the upstream router directly connected to the failed link generates an RSVP path error message to the headend. As a response to the message, the headend sends an RSVP path tear message and the corresponding path option is marked as invalid for a specified period and the next path-option (if any) is evaluated.

To retry the ABR immediately, a second path option (identical to the first one) should be configured. Alternatively, the retry period (path-option hold-down, 2 minutes by default) can be tuned to achieve a faster retry.

MPLS-TE Forwarding Adjacency

The MPLS-TE Forwarding Adjacency feature allows a network administrator to handle a traffic engineering label-switched path (LSP) tunnel as a link in an Interior Gateway Protocol (IGP) network based on the Shortest Path First (SPF) algorithm. A forwarding adjacency can be created between routers regardless of their location in the network.

MPLS-TE Forwarding Adjacency Benefits

TE tunnel interfaces are advertised in the IGP network just like any other link. Routers can then use these advertisements in their IGPs to compute the SPF even if they are not the headend of any TE tunnels.
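As a minimal sketch, a TE tunnel can be advertised into the IGP as a forwarding adjacency from tunnel interface configuration; the tunnel number and holdtime value here are illustrative:

```
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# forwarding-adjacency holdtime 10000
```

Note that IGPs generally require a tunnel configured in the opposite direction as well before the forwarding adjacency is used in SPF.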

MPLS-TE Forwarding Adjacency Prerequisites

Your network must support the following features before you enable the MPLS-TE Forwarding Adjacency feature:

MPLS

IP Cisco Express Forwarding

Intermediate System-to-Intermediate System (IS-IS)

Unequal Load Balancing

Unequal load balancing permits the routing of unequal proportions of
traffic through tunnels to a common destination. Load shares on tunnels to the
same destination are determined by TE from the tunnel configuration and passed
through the MPLS Label Switching Database (LSD) to the Forwarding Information
Base (FIB).

Note

Load share values are renormalized by the FIB using values suitable
for use by the forwarding code. The exact traffic ratios observed may not,
therefore, exactly mirror the configured traffic ratios. This effect is more
pronounced if there are many parallel tunnels to a destination, or if the load
shares assigned to those tunnels are very different. The exact renormalization
algorithm used is platform-dependent.

There are two ways to configure load balancing:

Explicit configuration

Using this method, load shares are explicitly configured on each
tunnel.

Bandwidth configuration

If a tunnel is not configured with load-sharing parameters, its configured bandwidth is used as its load-share value. Bandwidth and load-share values are therefore considered equivalent for load-share calculations, and are compared directly between tunnels.

Note

Load shares are not dependent on any configuration other than the load
share and bandwidth configured on the tunnel and the state of the global
configuration switch.
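The two configuration methods can be sketched as follows; the tunnel number and share value are illustrative:

```
! Explicit configuration: load share set per tunnel
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# load-share 80

! Global switch enabling unequal load balancing
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# load-share unequal
```

Tunnels without an explicit load-share fall back to the bandwidth-based comparison described above.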

Path Computation Element

Path Computation Element (PCE) solves the specific issue of inter-domain
path computation for MPLS-TE label switched paths (LSPs), when the head-end router does not possess
full network topology information (for example, when the head-end and tail-end
routers of an LSP reside in different IGP areas).

PCE uses area border routers (ABRs) to compute a TE LSP that spans multiple IGP areas; it can also compute inter-AS TE LSPs.

PCE is usually used to define an overall architecture, which is made of
several components, as follows:

Path Computation Element (PCE)

Represents a software module (which can be a component or
application) that enables the router to compute paths applying a set of
constraints between any pair of nodes within the router’s TE topology database.
PCEs are discovered through IGP.

Path Computation Client (PCC)

Represents a software module running on a router that is capable
of sending and receiving path computation requests and responses to and from
PCEs. The PCC is typically an LSR (Label Switching Router).

PCC-PCE communication protocol (PCEP)

A TCP-based protocol defined by the IETF PCE working group. PCEP
defines a set of messages and objects used to manage PCEP sessions
and to request and send paths for multi-domain TE LSPs. It is used for
communication between a PCC and a PCE (as well as between two PCEs) and employs IGP
extensions to dynamically discover PCEs.
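As an illustrative sketch, a router is enabled as a PCE by configuring a PCE address under MPLS-TE, while a PCC requests PCE-based computation through a tunnel path option; the address and tunnel ID are examples:

```
! On the PCE node
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# pce address ipv4 10.1.1.1

! On the PCC, request a PCE-computed path for the tunnel
RP/0/RP0/CPU0:router(config)# interface tunnel-te 10
RP/0/RP0/CPU0:router(config-if)# path-option 1 dynamic pce
```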

Policy-Based Tunnel Selection

Policy-Based Tunnel Selection (PBTS) provides a mechanism that lets you direct traffic into specific TE
tunnels based on different criteria. PBTS benefits Internet service
providers (ISPs) that carry voice and data traffic through their MPLS and
MPLS/VPN networks and want to route this traffic so as to provide optimized voice
service.

PBTS works by selecting tunnels based on the classification criteria of
the incoming packets, which are based on the IP precedence, experimental (EXP), or type of service (ToS) field
in the packet. When there are no paths with a default class configured, this
traffic is forwarded using the paths with the lowest class value.
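A minimal PBTS sketch: traffic whose classification matches the tunnel's policy class is directed into that tunnel. The class value and tunnel ID below are illustrative:

```
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# policy-class 5
```

Tunnels with no policy-class configured act as default-class tunnels for traffic that matches no configured class.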

PBTS Restrictions

When QoS EXP remarking is enabled on an interface, the remarked EXP value, not the incoming EXP value, is used to determine the egress tunnel interface.

Egress-side remarking does not affect PBTS tunnel selection.

Path Protection

Path protection provides an end-to-end failure recovery mechanism (that is, a full path protection) for MPLS-TE tunnels. A secondary Label Switched Path (LSP) is established, in advance, to provide failure protection for the protected LSP that is carrying a tunnel's TE traffic. When there is a failure on the protected LSP, the source router immediately enables the secondary LSP to temporarily carry the tunnel's traffic. If there is a failure on the secondary LSP, the tunnel no longer has path protection until the failure along the secondary path is cleared. Path protection can be used within a single area (OSPF or IS-IS), external BGP [eBGP], and static routes.

These failure detection mechanisms trigger a switchover to a secondary tunnel:

Notification from the Bidirectional Forwarding Detection (BFD) protocol that a neighbor is lost

Notification from the Interior Gateway Protocol (IGP) that the adjacency is down

Local teardown of the protected tunnel's LSP due to preemption in order to signal higher priority LSPs, a Packet over SONET (POS) alarm, online insertion and removal (OIR), and so on

An alternate recovery mechanism is Fast Reroute (FRR), which protects MPLS-TE LSPs only from link and node failures, by locally repairing the LSPs at the point of failure.

Although not as fast as link or node protection, presignaling a secondary LSP is faster than configuring a secondary primary path option, or allowing the tunnel's source router to dynamically recalculate a path. The actual recovery time is topology-dependent, and affected by delay factors such as propagation delay or switch fabric latency.

An explicit path option can be configured for the path-protected TE tunnel, with the secondary path option configured as dynamic.

Do not use link and node protection with path protection on the headend router.

The maximum number of path-protected TE tunnel heads is 2000.

The maximum number of TE tunnel heads is 4000.

When path protection is enabled for a tunnel, and the primary label switched path (LSP) is not assigned a backup tunnel, but the standby LSP is assigned fast-reroute (FRR), the MPLS TE FRR protected value displayed is different from the Cisco express forwarding (CEF) fast-reroute value.
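Path protection is enabled per tunnel; a common arrangement (sketched here with illustrative names, addresses, and IDs) pairs an explicit primary path option with a dynamic secondary:

```
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# destination 10.0.0.2
RP/0/RP0/CPU0:router(config-if)# path-protection
RP/0/RP0/CPU0:router(config-if)# path-option 1 explicit name PRIMARY-PATH
RP/0/RP0/CPU0:router(config-if)# path-option 2 dynamic
```

With path protection enabled, the secondary LSP is presignaled along the next path option so the headend can switch over immediately on failure.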

MPLS-TE Automatic Bandwidth Overview

MPLS-TE automatic bandwidth is configured on individual Label Switched
Paths (LSPs) at every head-end. MPLS-TE monitors the traffic rate on a tunnel
interface. Periodically, MPLS-TE resizes the bandwidth on the tunnel interface
to align it closely with the traffic in the tunnel. MPLS-TE automatic bandwidth can
perform these functions:

Periodically polls the tunnel output rate

Resizes the tunnel bandwidth to the highest rate observed
during a given period

For every traffic-engineered tunnel that is configured for an automatic
bandwidth, the average output rate is sampled, based on various configurable
parameters. Then, the tunnel bandwidth is readjusted automatically based upon
either the largest average output rate that was noticed during a certain interval, or
a configured maximum bandwidth value.

This table lists the automatic bandwidth functions.

Table 2 Automatic Bandwidth Variables

Function

Command

Description

Default Value

Application frequency

application command

Configures how often the tunnel bandwidth is changed for each tunnel. The application period is the interval between
bandwidth applications, during which output rate collection is done.

24 hours

Requested bandwidth

bw-limit command

Limits the range of bandwidth that the automatic-bandwidth feature can
request for a tunnel.

0 Kbps

Collection frequency

auto-bw collect command

Configures how often the tunnel output rate is polled globally
for all tunnels.

5 min

Highest collected bandwidth

—

You cannot configure this value.

—

Delta

—

You cannot configure this value.

—

The output rate on a tunnel is collected at regular intervals that are
configured by using the
application command in MPLS-TE auto bandwidth
interface configuration mode. When the application period timer expires, and
when the difference between the measured and the current bandwidth exceeds the
adjustment threshold, the tunnel is reoptimized. Then, the bandwidth samples
are cleared to record the new largest output rate at the next interval.

When reoptimizing the LSP with the new bandwidth, a new path request is
generated. If the new bandwidth is not available, the last good LSP
continues to be used. This way, the network experiences no traffic
interruptions.

If minimum or maximum bandwidth values are configured for a tunnel, the
bandwidth, which the automatic bandwidth signals, stays within these values.

Note

When more than 100 tunnels are auto-bw enabled, the algorithm
jitters the first application of every tunnel by a maximum of 20% (up to
1 hour), to avoid too many tunnels running auto-bandwidth applications at the
same time.

If a tunnel is shut down and is later brought up again, the adjusted
bandwidth is lost and the tunnel is brought back with the initially configured
bandwidth. In addition, the application period is reset when the tunnel is brought
back up.

Adjustment Threshold

Adjustment Threshold is defined as a percentage of the current tunnel bandwidth and an absolute (minimum) bandwidth. Both thresholds must be fulfilled for the automatic bandwidth to resignal the tunnel. The tunnel bandwidth is resized only if the difference between the largest sample output rate and the current tunnel bandwidth is larger than the adjustment thresholds.

For example, assume that automatic bandwidth is enabled on a tunnel whose highest observed output rate is 30 Mbps, and that the tunnel was initially configured for 45 Mbps. The difference is therefore 15 Mbps. With the default adjustment thresholds of 10% and 10 kbps, the tunnel is signaled with 30 Mbps when the application timer expires: 10% of 45 Mbps is 4.5 Mbps, which is smaller than the 15 Mbps difference, and the absolute threshold of 10 kbps is also crossed.
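Pulling the variables in Table 2 together, an auto-bandwidth configuration can be sketched as follows; the frequency, limits, thresholds, and submode prompts are illustrative:

```
! Global collection frequency, in minutes, for all tunnels
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# auto-bw collect frequency 5

! Per-tunnel auto-bandwidth parameters
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# auto-bw
RP/0/RP0/CPU0:router(config-if-tunte-autobw)# application 1440
RP/0/RP0/CPU0:router(config-if-tunte-autobw)# bw-limit min 1000 max 100000
RP/0/RP0/CPU0:router(config-if-tunte-autobw)# adjustment-threshold 10 min 10
```

Here the tunnel is resized at most once per day (1440 minutes), within 1–100 Mbps, and only when the difference exceeds both 10% and 10 kbps.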

Overflow Detection

Overflow detection is used if a bandwidth must be resized as soon as an overflow condition is detected, without having to wait for the expiry of an automatic bandwidth application frequency interval.

For overflow detection, you configure a limit N, a percentage threshold Y%, and optionally a minimum bandwidth threshold Z. The percentage threshold is defined as a percentage of the actual signaled tunnel bandwidth. When the difference between the measured bandwidth and the actual bandwidth exceeds both the Y% and the Z thresholds for N consecutive times, the system triggers an overflow detection.

The bandwidth adjustment by overflow detection is triggered only by an increase in traffic volume through the tunnel, not by a decrease. When overflow detection is triggered, the automatic bandwidth application interval is reset.

By default, the overflow detection is disabled and needs to be manually configured.
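Overflow detection is configured under the same per-tunnel auto-bw submode; the threshold (Y%), minimum (Z), and limit (N) values here are illustrative:

```
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# auto-bw
RP/0/RP0/CPU0:router(config-if-tunte-autobw)# overflow threshold 50 min 1000 limit 3
```

With these values, three consecutive samples exceeding the signaled bandwidth by both 50% and 1000 kbps trigger an immediate resize.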

Restrictions for MPLS-TE Automatic Bandwidth

The automatic bandwidth cannot update the tunnel bandwidth in these situations:

The tunnel is in a fast reroute (FRR) backup, active, or path-protect
active state. This is because protection is assumed to be a
temporary state, and there is no need to reserve bandwidth on a backup
tunnel or to take bandwidth away from other primary or backup tunnels.

Reoptimization fails to occur during a lockdown. In this case, the
automatic bandwidth does not update the bandwidth unless the bandwidth
application is manually triggered by using the
mpls traffic-eng auto-bw apply command in
EXEC mode.

Point-to-Multipoint Traffic-Engineering Overview

By using RSVP-TE extensions as defined in RFC 4875, multiple sub-LSPs are signaled for a given TE source. The P2MP tunnel is considered a set of Source-to-Leaf (S2L) sub-LSPs that connect the TE source to multiple leaf Provider Edge (PE) nodes.

At the TE source, the ingress point of the P2MP-TE tunnel, IP multicast traffic is encapsulated with a unique MPLS label, which is associated with the P2MP-TE tunnel. The traffic continues to be label-switched in the P2MP tree. If needed, the labeled packet is replicated at branch nodes along the P2MP tree. When the labeled packet reaches the egress leaf (PE) node, the MPLS label is removed and forwarded onto the IP multicast tree across the PE-CE link.

To enable end-to-end IP multicast connectivity, RSVP is used in the MPLS-core for P2MP-TE signaling and PIM is used for PE-CE link signaling.

In the MPLS network, RSVP P2MP-TE replaces PIM as the tree-building mechanism. RSVP-TE grafts or prunes a given P2MP tree when end-points are added or removed in the TE source configuration (an explicit user operation).

These are the definitions for Point-to-Multipoint (P2MP) tunnels:

Source

Configures the node in which Label Switched Path (LSP) signaling is initiated.

Mid-point

Specifies the transit node in which LSP signaling is processed (that is, not a source or receiver).

Receiver, Leaf, and Destination

Specifies the node in which LSP signaling ends.

Branch Point

Specifies the node in which packet replication is performed.

Source-to-Leaf (S2L) SubLSP

Specifies the P2MP-TE LSP segment that runs from the source to one leaf.

Point-to-Multipoint Traffic-Engineering Features

P2MP RSVP-TE (RFC 4875) is supported. RFC 4875 is based on
nonaggregate signaling; that is, per-S2L signaling. Only P2MP LSPs are
supported.

Point-to-Multipoint RSVP-TE

RSVP-TE signals a P2MP tunnel based on a manual configuration. If all Source-to-Leaf (S2L) sub-LSPs use an explicit path, the P2MP tunnel creates a static tree that
follows a predefined path based on a constraint such as a deterministic Label
Switched Path (LSP). If an S2L uses a dynamic path, RSVP-TE creates a P2MP tunnel based on the best path in the RSVP-TE topology. RSVP-TE supports bandwidth reservation for constraint-based routing.

When an explicit path option is used, specify both the local and peer IP addresses in the explicit path option, provided the link is a GigabitEthernet or a TenGigE based interface. For point-to-point links like POS or bundle POS, it is sufficient to mention the remote or peer IP address in the explicit path option.

RSVP-TE is suited to distributing streams whose topology tree does not
change often (that is, where the sources and receivers are stable). For
example, large-scale video distribution between major sites is a suitable
subset of multicast applications. Because multicast traffic is already in the
tunnel, the RSVP-TE tree is protected as long as you build a backup path.

Fast-Reroute (FRR) capability is supported for P2MP RSVP-TE by using the unicast link protection. You can choose the type of traffic to go to the backup link.

The P2MP tunnel is applicable to all TE tunnel destinations (intra-area, inter-area, or inter-AS).

Within an IGP area, P2MP tunnels are signaled by the dynamic and explicit path options. Inter-area and inter-AS P2MP tunnels are signaled by the verbatim path option only.

Point-to-Multipoint Fast Reroute

MPLS-TE Fast Reroute (FRR) is a mechanism to minimize interruption in traffic delivery to a TE Label Switched Path (LSP) destination as a result of link or node failures. FRR enables temporarily fast switching of LSP traffic along an alternative backup path around a network failure, until the TE tunnel source signals a new end-to-end LSP.

The Point-of-Local Repair (PLR) is a node that selects a backup tunnel and switches the LSP traffic onto the backup tunnel in case a failure is detected. The receiver of the backup tunnel is referred to as the Merge Point (MP).

Both Point-to-Point (P2P) and P2MP-TE support only the Facility FRR method from RFC 4090.

Fast-reroutable P2MP LSPs can coexist with fast-reroutable P2P LSPs in a network. Node, link, and bandwidth protection for P2P LSPs are supported. Both MPLS-TE link and node protection rely on the fact that labels for all primary LSPs and sub-LSPs use the MPLS global label allocation; that is, one single (global) label space is used for all MPLS-TE-enabled physical interfaces on a given MPLS node.

Point-to-Multipoint Label Switch Path

The Point-to-Multipoint Label Switch Path (P2MP LSP) has only a single
root, which is the Ingress Label Switch Router (LSR). The P2MP LSP is created
based on a receiver that is connected to the Egress LSR. The Egress LSR
initiates the creation of the tree (for example, tunnel grafting or pruning is done by
performing an individual sub-LSP operation) by creating the Forwarding
Equivalency Class (FEC) and Opaque Value.

Note

Grafting and pruning operate on a per destination basis.

The Opaque Value contains the stream information that uniquely
identifies the tree to the root. To receive label switched multicast packets,
the Egress Provider Edge (PE) indicates to the upstream router (the next hop
closest to the root) which label it uses for the multicast source by applying the
label mapping message.

The upstream router does not need to have any knowledge of the source;
it needs only the received FEC to identify the correct P2MP LSP. If the
upstream router does not have any FEC state, it creates it and installs the
assigned downstream outgoing label into the label forwarding table. If the
upstream router is not the root of the tree, it must forward the label mapping
message to the next hop upstream. This process is repeated hop-by-hop until the
root is reached.

By using downstream allocation, the router that wants to receive the
multicast traffic assigns the label for it. The label request, which is
sent to the upstream router, is similar to an unsolicited label mapping (that is, the upstream does not request it). The upstream router that receives
that label mapping uses the specific label to send multicast packets
downstream to the receiver. The advantage is that the router, which allocates
the labels, does not get into a situation where it has the same label for two
different multicast sources. This is because it manages its own label space allocation
locally.

Path Option for Point-to-Multipoint RSVP-TE

P2MP tunnels are signaled by using the dynamic and explicit path-options in an IGP intra area.
InterArea and InterAS cases for P2MP tunnels are signaled by the verbatim path option.

Path options for P2MP tunnels are individually configured for each sub-LSP. Only one path option per sub-LSP (destination) is allowed. You can choose whether the corresponding sub-LSP is dynamically or explicitly routed. For the explicit option, you can configure the verbatim path option to bypass the topology database lookup and verification for the specified destination.

Both dynamic and explicit path options are supported on a per
destination basis by using the
path-option (P2MP-TE) command. In addition, you
can combine both path options.
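Per-destination path options for a P2MP tunnel can be sketched as follows; the tunnel ID, destinations, explicit-path name, and submode prompts are illustrative:

```
RP/0/RP0/CPU0:router(config)# interface tunnel-mte 1
RP/0/RP0/CPU0:router(config-if)# destination 10.0.0.5
RP/0/RP0/CPU0:router(config-if-p2mp-dest)# path-option 1 dynamic
RP/0/RP0/CPU0:router(config-if-p2mp-dest)# exit
RP/0/RP0/CPU0:router(config-if)# destination 10.0.0.6
RP/0/RP0/CPU0:router(config-if-p2mp-dest)# path-option 1 explicit name PATH-PE6
```

Each destination (sub-LSP) carries its own single path option, so dynamic and explicit routing can be mixed within one tunnel.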

Explicit Path Option

Configures the intermediate hops that are traversed by a sub-LSP going from the TE source to the egress MPLS node. Although an explicit path configuration enables granular control of sub-LSP paths in an MPLS network, multiple explicit paths are configured for specific network topologies with a limited number of (equal-cost) links or paths.

Dynamic Path Option

Computes the IGP path of a P2MP tree sub-LSP based on the OSPF or IS-IS algorithm. The path from the TE source is dynamically calculated based on the IGP topology.

Dynamic Path Calculation Requirements

Dynamic path calculation for each sub-LSP uses the same path parameters as those for the path calculation of regular point-to-point TE tunnels. As part of the sub-LSP path calculation, the link resource (bandwidth) is included, which is flooded throughout the MPLS network through the existing RSVP-TE extensions to OSPF and ISIS. Instead of dynamic calculated paths, explicit paths are also configured for one or more sub-LSPs that are associated with the P2MP-TE tunnel.

OSPF or IS-IS is used for each destination.

TE topology and tunnel constraints are used as input to the path calculation.

Tunnel constraints such as affinity, bandwidth, and priorities are used for all destinations in a tunnel.

Path calculation yields an explicit route to each destination.

Static Path Calculation Requirements

The static path calculation does not require any new extensions to IGP to advertise link availability.

MPLS Traffic Engineering Shared Risk Link Groups

Shared Risk Link Groups (SRLG) in MPLS traffic engineering refer to situations in which links in a network share a common fiber (or a common physical attribute). These links share a risk: when one link fails, other links in the group might fail too.

OSPF and Intermediate System-to-Intermediate System (IS-IS) flood the SRLG value information (including other TE link attributes such as bandwidth availability and affinity) using a sub-type length value (sub-TLV), so that all routers in the network have the SRLG information for each link.

To activate the SRLG feature, configure the SRLG value of each link that has a shared risk with another link. A maximum of 30 SRLGs per interface is allowed. You can configure this feature on multiple interfaces including the bundle interface.
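SRLG membership is configured per link; the interface name and SRLG values below are illustrative:

```
RP/0/RP0/CPU0:router(config)# srlg
RP/0/RP0/CPU0:router(config-srlg)# interface GigabitEthernet0/0/0/1
RP/0/RP0/CPU0:router(config-srlg-if)# value 5
RP/0/RP0/CPU0:router(config-srlg-if)# value 6
```

The configured values are then flooded by OSPF or IS-IS as described above, so every router learns each link's SRLG membership.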

This feature is enabled through the explicit-path command, which lets you create an IP explicit path and enter a configuration submode for specifying the path. In this submode, the exclude-address command specifies addresses to exclude from the path, and the exclude-srlg command specifies an IP address whose SRLGs are to be excluded from the explicit path.

If the excluded address or excluded srlg for an MPLS TE LSP identifies a flooded link, the constraint-based shortest path first (CSPF) routing algorithm does not consider that link when computing paths for the LSP. If the excluded address specifies a flooded MPLS TE router ID, the CSPF routing algorithm does not allow paths for the LSP to traverse the node identified by the router ID.
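An explicit path that excludes an address and the SRLGs of a given link can be sketched as follows; the path name, index values, and addresses are illustrative:

```
RP/0/RP0/CPU0:router(config)# explicit-path name BACKUP-PATH
RP/0/RP0/CPU0:router(config-expl-path)# index 10 exclude-address 192.168.1.1
RP/0/RP0/CPU0:router(config-expl-path)# index 20 exclude-srlg 192.168.2.1
```

CSPF then avoids the excluded link and every other link sharing an SRLG with 192.168.2.1 when computing the path.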

Fast ReRoute with SRLG Constraints

Fast ReRoute (FRR) protects MPLS TE Label Switch Paths (LSPs) from link and node failures by locally repairing the LSPs at the point of failure. This protection allows data to continue to flow on LSPs, while their headend routers attempt to establish new end-to-end LSPs to replace them. FRR locally repairs the protected LSPs by rerouting them over backup tunnels that bypass failed links or nodes.

Backup tunnels that bypass only a single link of the LSP's path provide link protection. They protect LSPs by specifying the protected link's IP addresses, from which SRLG values are extracted and excluded from the explicit path, thereby bypassing the failed link. These are referred to as next-hop (NHOP) backup tunnels because they terminate at the LSP's next hop beyond the point of failure. The following figure illustrates an NHOP backup tunnel.

Figure 8. NHOP Backup Tunnel with SRLG constraint

In the topology shown in the above figure, the backup tunnel path computation can be performed in this manner:

Get all SRLG values from the exclude-SRLG link (SRLG values 5 and 6)

Mark all the links with the same SRLG value to be excluded from SPF

Compute the path with CSPF: R2->R6->R7->R3

FRR provides node protection for LSPs. Backup tunnels that bypass next-hop nodes along LSP paths are called NNHOP backup tunnels, because they terminate at the node following the next-hop node of the LSP paths, thereby bypassing the next-hop node. They protect LSPs when a node along their path fails by enabling the node upstream of the point of failure to reroute the LSPs and their traffic around the failed node to the next-next hop. They also protect LSPs by specifying the protected link IP addresses that are to be excluded from the explicit path, along with the SRLG values associated with those IP addresses. NNHOP backup tunnels also provide protection from link failures by bypassing the failed link as well as the node. The following figure illustrates an NNHOP backup tunnel.

Figure 9. NNHOP Backup Tunnel with SRLG constraint

In the topology shown in the above figure, the backup tunnel path computation is performed in a similar manner.

Multiple Backup Tunnels Protecting the Same Interface

Increased backup capacity—If the protected interface is a high-capacity link and no single backup path exists with an equal capacity, multiple backup tunnels can protect that one high-capacity link. The LSPs using this link fail over to different backup tunnels, allowing all of the LSPs to have adequate bandwidth protection during failure (rerouting). If bandwidth protection is not desired, the router spreads LSPs across all available backup tunnels (that is, load balancing across backup tunnels).

Auto-Tunnel Mesh

The MPLS traffic engineering auto-tunnel mesh (Auto-mesh) feature allows you to set up a full mesh of TE P2P tunnels automatically, with a minimal set of MPLS traffic engineering configurations. You may configure one or more mesh-groups. Each mesh-group requires a destination-list (an IPv4 prefix-list) of destinations, which are used as the destinations for creating tunnels for that mesh-group.

You may configure MPLS TE auto-mesh type attribute-sets (templates) and associate them with mesh-groups. The LSR creates tunnels using the tunnel properties defined in the attribute-set.

Auto-Tunnel mesh provides these benefits:

Minimizes the initial configuration of the network.
You configure a tunnel-properties template and mesh-groups or destination-lists on each TE LSR, which then creates a full mesh of TE tunnels between those LSRs.

Minimizes future configuration changes resulting from network growth.
It eliminates the need to reconfigure each existing TE LSR to establish a full mesh of TE tunnels whenever a new TE LSR is added to the network.

A prefix-list may be configured on each TE
router to match a desired set of router IDs (MPLS TE IDs). For example, if a prefix-list is configured to
match addresses of 100.0.0.0 with wildcard 0.255.255.255, then all
100.x.x.x router IDs are included in the auto-mesh
group.

When a new TE router is added in the network
and its router ID is also in the block of addresses described by
the prefix-list, for example, 100.x.x.x, then it is added in the
auto-mesh group on each existing TE router without having to
explicitly modify the prefix-list or perform any additional
configuration.

Auto-mesh does not create tunnels to its own
(local) TE router IDs.

Note

When prefix-list configurations on
all routers are not identical, the result can be a nonsymmetrical
mesh of tunnels between those routers.
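An auto-tunnel mesh setup can be sketched as follows, combining a prefix-list destination-list with a mesh-group and attribute-set; all names, IDs, prefixes, and submode prompts are illustrative:

```
! Destination-list: match 100.x.x.x TE router IDs
RP/0/RP0/CPU0:router(config)# ipv4 prefix-list MESH-DESTS
RP/0/RP0/CPU0:router(config-ipv4_pfx)# 10 permit 100.0.0.0/8

! Mesh-group referencing the destination-list and a tunnel template
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# auto-tunnel mesh
RP/0/RP0/CPU0:router(config-te-auto-mesh)# tunnel-id min 1000 max 2000
RP/0/RP0/CPU0:router(config-te-auto-mesh)# group 10
RP/0/RP0/CPU0:router(config-te-mesh-group)# destination-list MESH-DESTS
RP/0/RP0/CPU0:router(config-te-mesh-group)# attribute-set MESH-ATTRS
```

Any new TE router whose router ID falls in 100.0.0.0/8 is then meshed automatically, without modifying the prefix-list.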

Building MPLS-TE Topology

Before you start to build the MPLS-TE topology, you must have enabled:

IGP such as OSPF or IS-IS for MPLS-TE.

MPLS Label Distribution Protocol (LDP).

RSVP on the port interface.

Stable router ID is required at either end of the link to ensure
that the link is successful. If you do not assign a router ID, the system
defaults to the global router ID. Default router IDs are subject to change,
which can result in an unstable link.

If you are going to use nondefault holdtime or intervals, you must
decide the values to which they are set.
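The prerequisites above can be sketched as a base configuration; interface names, the bandwidth value, and the OSPF process are illustrative, and exact command placement can vary by software release:

```
! RSVP with reservable bandwidth on the TE link
RP/0/RP0/CPU0:router(config)# rsvp interface GigabitEthernet0/0/0/0
RP/0/RP0/CPU0:router(config-rsvp-if)# bandwidth 100000

! Enable MPLS-TE on the link
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# interface GigabitEthernet0/0/0/0

! IGP with TE extensions and a stable TE router ID
RP/0/RP0/CPU0:router(config)# router ospf 1
RP/0/RP0/CPU0:router(config-ospf)# mpls traffic-eng router-id Loopback0
RP/0/RP0/CPU0:router(config-ospf)# area 0
RP/0/RP0/CPU0:router(config-ospf-ar)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-ospf-ar)# interface GigabitEthernet0/0/0/0
```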

Creating an MPLS-TE Tunnel

Creating an MPLS-TE tunnel is a process of customizing the traffic
engineering to fit your network topology.

Perform this task to create an MPLS-TE tunnel after you have built the
traffic engineering topology.

Before You Begin

The following prerequisites are required to create an MPLS-TE tunnel:

You must have a router ID for the neighboring router.

Stable router ID is required at either end of the link to ensure
that the link is successful. If you do not assign a router ID to the routers,
the system defaults to the global router ID. Default router IDs are subject to
change, which can result in an unstable link.

If you are going to use nondefault holdtime or intervals, you must
decide the values to which they are set.
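After the topology is built, a tunnel can be created as sketched below; the tunnel ID, addresses, and bandwidth are illustrative:

```
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# ipv4 unnumbered Loopback0
RP/0/RP0/CPU0:router(config-if)# destination 10.0.0.2
RP/0/RP0/CPU0:router(config-if)# signalled-bandwidth 1000
RP/0/RP0/CPU0:router(config-if)# path-option 1 dynamic
```

The destination is the TE router ID of the tail-end router, and the dynamic path option lets CSPF compute the route.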

Configuring Forwarding over the MPLS-TE Tunnel

Perform this task to configure forwarding over the MPLS-TE tunnel
created in the previous task. This
task allows MPLS packets to be forwarded on the link between network
neighbors.

Before You Begin

The following prerequisites are required to configure forwarding over
the MPLS-TE tunnel:

You must have a router ID for the neighboring router.

Stable router ID is required at either end of the link to ensure
that the link is successful. If you do not assign a router ID to the routers,
the system defaults to the global router ID. Default router IDs are subject to
change, which can result in an unstable link.
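Forwarding over the tunnel is commonly enabled with autoroute announce, which lets the IGP install routes over the tunnel; the tunnel ID is illustrative:

```
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# autoroute announce
```

A static route pointing at the tunnel interface is an alternative when only selected prefixes should use the tunnel.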

Protecting MPLS Tunnels with Fast Reroute

Perform this task to protect MPLS-TE tunnels, as created in the
previous task.

Note

Although this task is similar to the previous task, its importance
makes it necessary to present it as part of the tasks required for traffic
engineering on
Cisco IOS XR software.

Before You Begin

The following prerequisites are required to protect MPLS-TE tunnels:

You must have a router ID for the neighboring router.

Stable router ID is required at either end of the link to ensure
that the link is successful. If you do not assign a router ID to the routers,
the system defaults to the global router ID. Default router IDs are subject to
change, which can result in an unstable link.
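FRR protection involves marking the primary tunnel as fast-reroutable and assigning a backup tunnel to the protected interface; the tunnel IDs and interface name are illustrative:

```
! On the primary tunnel's headend
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# fast-reroute

! On the point of local repair: bind a backup tunnel to the protected link
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# interface GigabitEthernet0/0/0/0
RP/0/RP0/CPU0:router(config-mpls-te-if)# backup-path tunnel-te 100
```

The backup tunnel (tunnel-te 100 here) must itself be configured with a path that avoids the protected link or node.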

Enabling an AutoTunnel Backup

Perform this task to configure the AutoTunnel Backup feature. By default, this feature is disabled. You can configure the AutoTunnel Backup feature for each interface. It has to be explicitly enabled for each interface or link.
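AutoTunnel backup is enabled per TE interface, with a global range of tunnel IDs reserved for the automatically created backups; the values and submode prompts are illustrative:

```
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# auto-tunnel backup tunnel-id min 6000 max 6500
RP/0/RP0/CPU0:router(config-mpls-te)# interface GigabitEthernet0/0/0/2
RP/0/RP0/CPU0:router(config-mpls-te-if)# auto-tunnel backup
RP/0/RP0/CPU0:router(config-mpls-te-if-auto-backup)# nhop-only
```

The optional nhop-only keyword restricts the feature to next-hop (link-protection) backup tunnels; omit it to allow NNHOP backups as well.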

Configuring a Prestandard DS-TE Tunnel

The following prerequisites are required to configure a Prestandard
DS-TE tunnel:

You must have a router ID for the neighboring router.

Stable router ID is required at either end of the link to ensure
that the link is successful. If you do not assign a router ID to the routers,
the system defaults to the global router ID. Default router IDs are subject to
change, which can result in an unstable link.

Configuring an IETF DS-TE Tunnel Using RDM

The following prerequisites are required to create an IETF mode DS-TE
tunnel using RDM:

You must have a router ID for the neighboring router.

Stable router ID is required at either end of the link to ensure
that the link is successful. If you do not assign a router ID to the routers,
the system defaults to the global router ID. Default router IDs are subject to
change, which can result in an unstable link.

Configuring an IETF DS-TE Tunnel Using MAM

The following prerequisites are required to configure an IETF mode
differentiated services traffic engineering tunnel using the MAM bandwidth
constraint model:

You must have a router ID for the neighboring router.

Stable router ID is required at either end of the link to ensure
that the link is successful. If you do not assign a router ID to the routers,
the system defaults to the global router ID. Default router IDs are subject to
change, which can result in an unstable link.

Configuring MPLS-TE and Fast-Reroute on OSPF

Perform this task to configure MPLS-TE and Fast Reroute (FRR) on OSPF.

Before You Begin

Note

Only point-to-point (P2P) interfaces are supported for OSPF multiple
adjacencies. These may be either native P2P interfaces or broadcast interfaces
on which the OSPF P2P configuration command is applied to force them to behave
as P2P interfaces as far as OSPF is concerned. This restriction does not apply
to IS-IS.

Network mask can be a four-part dotted decimal address.
For example, 255.0.0.0 indicates that each bit equal to 1 means that the
corresponding address bit belongs to the network address.

Network mask can be indicated as a slash (/) and a number
(prefix length). The prefix length is a decimal value that indicates how many
of the high-order contiguous bits of the address compose the prefix (the
network portion of the address). A slash must precede the decimal value, and
there is no space between the IP address and the slash.

Specifies that the TE router identifier for the node is the IP
address that is associated with a given interface. The router ID is specified
with an interface name or an IP address. By default, MPLS uses the global
router ID.

Step 7

Use the
commit or
end command.

commit—Saves the configuration changes and remains
within the configuration session.

end—Prompts user to take one of these actions:

Yes— Saves configuration changes and exits the
configuration session.

No—Exits the configuration session without
committing the configuration changes.

Cancel—Remains in the configuration mode, without
committing the configuration changes.
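The task above can be assembled into a sketch like the following. The OSPF process name, area, and tunnel number are assumptions for illustration:

```text
RP/0/RP0/CPU0:router(config)# router ospf 1
RP/0/RP0/CPU0:router(config-ospf)# mpls traffic-eng router-id Loopback0
RP/0/RP0/CPU0:router(config-ospf)# area 0
RP/0/RP0/CPU0:router(config-ospf-ar)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-ospf-ar)# exit
RP/0/RP0/CPU0:router(config-ospf)# exit
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# fast-reroute
```

Enabling fast-reroute on the tunnel-te interface marks the tunnel for FRR protection along its path.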

Configuring OSPF over IPCC

Perform this task to configure OSPF over IPCC on both the headend and
tailend routers. The IGP interface ID is configured for control network,
specifically for the signaling plane in the optical domain.

Note

IPCC support is restricted to routed, out-of-fiber, and out-of-band.

SUMMARY STEPS

1. configure

2. router ospf process-name

3. area area-id

4. interface type interface-path-id

5. exit

6. exit

7. mpls traffic-eng router-id {type interface-path-id | ip-address}

8. area area-id

9. Use the commit or end command.

DETAILED STEPS

Command or Action

Purpose

Step 1

configure

Example:

RP/0/RP0/CPU0:router# configure

Enters
global configuration mode.

Step 2

router ospf process-name

Example:

RP/0/RP0/CPU0:router(config)# router ospf 1

Configures OSPF routing and assigns a process name.

Step 3

area area-id

Example:

RP/0/RP0/CPU0:router(config-ospf)# area 0

Configures an area ID for the OSPF process (either as a decimal
value or IP address):

Backbone areas have an area ID of 0.

Non-backbone areas have a nonzero area ID.

Step 4

interface type interface-path-id

Example:

RP/0/RP0/CPU0:router(config-ospf-ar)# interface Loopback0

Enables IGP on the interface. This command is used to configure
any interface included in the control network.

Network mask is a four-part dotted decimal address. For
example, 255.0.0.0 indicates that each bit equal to 1 means that the
corresponding address bit belongs to the network address.

Network mask is indicated as a slash (/) and a number
(prefix length). The prefix length is a decimal value that indicates how many
of the high-order contiguous bits of the address compose the prefix (the
network portion of the address). A slash must precede the decimal value, and
there is no space between the IP address and the slash.

or

Enables IPv4 processing on a point-to-point interface without
assigning an explicit IPv4 address to that interface.

Note

If you configured an unnumbered GigabitEthernet interface in Step
2 and selected the ipv4 unnumbered interface command option in this step,
you must enter the ipv4 point-to-point command to
configure point-to-point interface mode.

Step 9

Use the commit or end command.
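Putting the summary steps for this task together on one router yields a sketch like this; the process name, area, and interface follow the examples shown in the detailed steps:

```text
RP/0/RP0/CPU0:router# configure
RP/0/RP0/CPU0:router(config)# router ospf 1
RP/0/RP0/CPU0:router(config-ospf)# area 0
RP/0/RP0/CPU0:router(config-ospf-ar)# interface Loopback0
RP/0/RP0/CPU0:router(config-ospf-ar-if)# exit
RP/0/RP0/CPU0:router(config-ospf-ar)# exit
RP/0/RP0/CPU0:router(config-ospf)# mpls traffic-eng router-id Loopback0
RP/0/RP0/CPU0:router(config-ospf)# area 0
RP/0/RP0/CPU0:router(config-ospf-ar)# commit
```

Repeat the same configuration on both the headend and tailend routers.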

Configuring Local Reservable Bandwidth

Perform this task to configure the local reservable bandwidth for the
data bearer channels.
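A sketch of reserving bandwidth on a data bearer channel might look like the following; the interface and the bandwidth value (in kbps) are assumptions for illustration:

```text
RP/0/RP0/CPU0:router(config)# rsvp
RP/0/RP0/CPU0:router(config-rsvp)# interface GigabitEthernet0/0/0/0
RP/0/RP0/CPU0:router(config-rsvp-if)# bandwidth 500000
```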

Configuring an Optical TE Tunnel Using Dynamic Path Option

Perform this task to configure a numbered or unnumbered optical tunnel
on a router; this example uses the dynamic path option on the headend router.
The dynamic option does not require you to specify the individual hops along
the path; the hops are calculated automatically.

Note

These examples describe how to configure optical tunnels. They do not
include procedures for every option available on the headend and tailend
routers.

Network mask can be a four-part dotted decimal address.
For example, 255.0.0.0 indicates that each bit equal to 1 means that the
corresponding address bit belongs to the network address.

Network mask can be indicated as a slash (/) and a number
(prefix length). The prefix length is a decimal value that indicates how many
of the high-order contiguous bits of the address compose the prefix (the
network portion of the address). A slash must precede the decimal value, and
there is no space between the IP address and the slash.

or

Enables IPv4 processing on a point-to-point interface without
assigning an explicit IPv4 address to that interface.


Step 4

passive

Example:

RP/0/RP0/CPU0:router(config-if)# passive

Configures a passive interface.

Note

The tailend (passive) router does not signal the tunnel; it
simply accepts a connection from the headend router. The tailend router
supports the same configuration as the headend router.

Step 5

match identifier tunnel-number

Example:

RP/0/RP0/CPU0:router(config-if)# match identifier gmpls1_t1

Configures the match identifier. Enter the headend router's
hostname, followed by _t and the headend tunnel number.
For example, if tunnel-te1 is configured on a headend router with a hostname of gmpls1,
the command is match identifier gmpls1_t1.

Note

The match identifier must correspond to the tunnel-gte number
configured on the headend router. Together with the address specified using the
destination command, this identifier uniquely identifies acceptable
incoming tunnel requests.

Step 6

destination ip-address

Example:

RP/0/RP0/CPU0:router(config-if)# destination 10.1.1.1

Assigns a destination address on the new tunnel.

Destination address is the remote node’s MPLS-TE router
ID.

Destination address is the merge point between backup and
protected tunnels.

Step 7

Use the commit or end command.
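Collected from the steps above, the tailend (passive) side of the tunnel reduces to a short sketch like this, entered under the optical tunnel interface (the identifier and destination values come from the examples in the steps):

```text
RP/0/RP0/CPU0:router(config-if)# passive
RP/0/RP0/CPU0:router(config-if)# match identifier gmpls1_t1
RP/0/RP0/CPU0:router(config-if)# destination 10.1.1.1
RP/0/RP0/CPU0:router(config-if)# commit
```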

Configuring LSP Hierarchy

These tasks describe the high-level steps that are required to configure LSP hierarchy.

LSP hierarchy allows standard MPLS-TE tunnels to be established over GMPLS-TE tunnels.

Before you can successfully configure LSP hierarchy, you must first establish a numbered optical tunnel between the headend and tailend routers.

To configure LSP hierarchy, you must perform a series of tasks that have been previously described in this GMPLS configuration section. The tasks, which must be completed in the order presented, are as follows:

Configuring Border Control Model

Border control model lets you specify the optical core tunnels to be advertised to edge packet topologies. Using this model, the entire topology is stored in a separate packet instance, allowing packet networks where these optical tunnels are advertised to use LSP hierarchy to signal an MPLS tunnel over the optical tunnel.

Consider the following information when configuring protection and restoration:

The GMPLS optical TE tunnel must be numbered and must have a valid IPv4 address.

The router ID, which is used for the IGP area and interface ID, must be consistent in all areas.

The OSPF interface ID may be numeric or alphanumeric.

Note

Border control model functionality is provided for multiple IGP instances in one area or in multiple IGP areas.

To configure border control model functionality, you will perform a series of tasks that have been previously described in this GMPLS configuration section. The tasks, which must be completed in the order presented, are as follows:

Configure two optical tunnels on different interfaces.

Note

When configuring IGP, you must keep the optical and packet topology information in separate routing tables.

Configure OSPF adjacency on each tunnel.

Configure bandwidth on each tunnel.

Configure packet tunnels.

Configuring Path Protection

Configuring an LSP

Perform this task to configure an LSP for an explicit path. Path
protection is enabled on a tunnel by adding an additional path option
configuration at the active end. The path can be configured either explicitly
or dynamically.

Note

When the dynamic option is used for both working and protecting
LSPs, CSPF extensions are used to determine paths with different degrees of
diversity. Once the paths are computed, they are used for the lifetime of the
LSPs. Each node on the path of an LSP determines whether or not it is the PSR
for that LSP. This determination is based on information obtained during
signaling.

Network mask can be a four-part dotted decimal address.
For example, 255.0.0.0 indicates that each bit equal to 1 means that the
corresponding address bit belongs to the network address.

Network mask can be indicated as a slash (/) and a number
(prefix length). The prefix length is a decimal value that indicates how many
of the high-order contiguous bits of the address compose the prefix (the
network portion of the address). A slash must precede the decimal value, and
there is no space between the IP address and the slash.

or

Enables IPv4 processing on a point-to-point interface without
assigning an explicit IPv4 address to that interface.

Step 4

signalled-name name

Example:

RP/0/RP0/CPU0:router(config-if)# signalled-name tunnel-gte1

Configures the signaled name for the MPLS-TE tunnel. The
name argument specifies the name used to signal the tunnel.

Enters an affinity name and a map value by using a color name
(repeat this command to assign multiple colors up to a maximum of 64 colors).
An affinity color name cannot exceed 64 characters. The value you assign to a
color name must be a single digit.

Step 4

Use the commit or end command.
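A sketch of adding a protecting path option to a tunnel, assuming dynamic paths for both the working and protecting LSPs (the tunnel number and path-option index are assumptions):

```text
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# path-protection
RP/0/RP0/CPU0:router(config-if)# path-option 1 dynamic
```

With path-protection enabled, the headend computes and signals a diverse protecting LSP in addition to the working LSP.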

Associating Affinity-Names with TE Links

The next step in configuring MPLS-TE Flexible Name-based
Tunnel Constraints is to assign affinity names and values to TE links. You can
assign up to 32 colors. Before you assign a color to a link, you
must define the name-to-value mapping for each color.

Configures link attributes for links comprising a tunnel. You can
have up to ten colors.

Multiple include statements can be specified under the tunnel
configuration. With this configuration, a link is eligible for CSPF if it has
the red color or the green color. Thus, a link with red plus any other colors,
as well as a link with green plus any other colors, meets the constraint.

Step 4

Use the commit or end command.
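The name-to-value mapping, the link assignment, and the include constraints described above might be sketched as follows; the color names, values, interface, and tunnel number are assumptions, and the exact keyword forms should be verified against your command reference:

```text
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# affinity-map red 1
RP/0/RP0/CPU0:router(config-mpls-te)# affinity-map green 2
RP/0/RP0/CPU0:router(config-mpls-te)# interface GigabitEthernet0/0/0/0
RP/0/RP0/CPU0:router(config-mpls-te-if)# attribute-names red green
RP/0/RP0/CPU0:router(config-mpls-te-if)# exit
RP/0/RP0/CPU0:router(config-mpls-te)# exit
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# affinity include red
RP/0/RP0/CPU0:router(config-if)# affinity include green
```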

Configuring Unequal Load Balancing

Setting Unequal Load Balancing Parameters

To configure unequal load balancing, you must first set the
parameters on each specific interface. The default
load share for tunnels with no explicit configuration is the configured
bandwidth.

Note

Equal load-sharing occurs if there is no configured bandwidth.

SUMMARY STEPS

1. configure

2. interface tunnel-te tunnel-id

3. load-share value

4. Use the commit or end command.

5. show mpls traffic-eng tunnels

DETAILED STEPS

Command or Action

Purpose

Step 1

configure

Example:

RP/0/RP0/CPU0:router# configure

Enters
global configuration mode.

Step 2

interface tunnel-te tunnel-id

Example:

RP/0/RP0/CPU0:router(config)# interface tunnel-te 1

Enters MPLS-TE tunnel interface configuration mode and
enables traffic engineering on a particular interface on the originating node.

Note

Only tunnel-te interfaces are permitted.

Step 3

load-share value

Example:

RP/0/RP0/CPU0:router(config-if)# load-share 1000

Configures the load-sharing parameters for the specified
interface.

Step 4

Use the commit or end command.

Step 5

show mpls traffic-eng tunnels

Example:

RP/0/RP0/CPU0:router# show mpls traffic-eng tunnels

Verifies the state of unequal load balancing, including bandwidth
and load-share values.
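The two configuration steps plus verification from this task can be sketched as follows; the tunnel number and load-share value come from the step examples:

```text
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# load-share 1000
RP/0/RP0/CPU0:router(config-if)# commit
RP/0/RP0/CPU0:router# show mpls traffic-eng tunnels
```

Depending on release, unequal load balancing may also need to be enabled globally under mpls traffic-eng (for example, with a load-share unequal command); verify this against your command reference.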

Forcing the Current Application Period to Expire Immediately

Perform this task to force the current application period to expire
immediately on the specified tunnel. The highest sampled bandwidth is applied to
the tunnel immediately, without waiting for the application period to end on its
own.

SUMMARY STEPS

1. mpls traffic-eng auto-bw apply {all | tunnel-te tunnel-number}

2. Use the commit or end command.

3. show mpls traffic-eng tunnels [auto-bw]

DETAILED STEPS

Command or Action

Purpose

Step 1

mpls traffic-eng auto-bw apply {all | tunnel-te tunnel-number}

Example:

RP/0/RP0/CPU0:router# mpls traffic-eng auto-bw apply tunnel-te 1

Configures the highest bandwidth available on a tunnel without
waiting for the current application period to end.

all

Configures the highest bandwidth available instantly on all
the tunnels.

tunnel-te

Configures the highest bandwidth instantly to the specified
tunnel. Range is from 0 to 65535.

Step 2

Use the commit or end command.

Step 3

show mpls traffic-eng tunnels [auto-bw]

Example:

RP/0/RP0/CPU0:router# show mpls traffic-eng tunnels auto-bw

Displays information about MPLS-TE tunnels for the automatic
bandwidth.

Configures the tunnel bandwidth change threshold to trigger an
adjustment.

percentage

Bandwidth change percent threshold to trigger an adjustment
if the largest sample percentage is higher or lower than the current tunnel
bandwidth. Range is from 1 to 100 percent. The default value is 5 percent.

min

Configures the bandwidth change value to trigger an
adjustment. The tunnel bandwidth is changed only if the largest sample is
higher or lower than the current tunnel bandwidth. Range is from 10 to
4294967295 kilobits per second (kbps). The default value is 10 kbps.

Bandwidth change percent to trigger an overflow. Range is
from 1 to 100 percent.

limit

Configures the number of consecutive collection intervals
that exceed the threshold. The bandwidth overflow triggers an early tunnel
bandwidth update. Range is from 1 to 10 collection periods. The default value
is none.

min

Configures the bandwidth change value in kbps to trigger an
overflow. Range is from 10 to 4294967295. The default value is 10.

Step 8

Use the commit or end command.

Step 9

show mpls traffic-eng tunnels [auto-bw]

Example:

RP/0/RP0/CPU0:router# show mpls traffic-eng tunnels auto-bw

Displays the MPLS-TE tunnel information only for tunnels in which
the automatic bandwidth is enabled.
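The adjustment-threshold and overflow parameters described above are configured under the tunnel's auto-bw submode. The values here are assumptions chosen within the documented ranges:

```text
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# auto-bw
RP/0/RP0/CPU0:router(config-if-tunte-autobw)# adjustment-threshold 20 min 1000
RP/0/RP0/CPU0:router(config-if-tunte-autobw)# overflow threshold 30 min 1000 limit 3
```

With this sketch, the tunnel bandwidth is adjusted only when the largest sample differs from the current bandwidth by at least 20 percent and 1000 kbps, and three consecutive overflowing samples trigger an early update.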

Configures an auto-tunnel mesh group of interfaces in LDP. You can enable LDP on all TE mesh group interfaces, or you can specify the TE mesh group ID on which LDP is enabled. The group ID range is from 0 to 4294967295.

Step 5

Use the commit or end command.

Configure Fast Reroute and SONET APS: Example

When SONET Automatic Protection Switching (APS) is configured on a
router, it does not offer protection for tunnels; because of this limitation,
fast reroute (FRR) remains the protection mechanism for MPLS-TE.

When APS is configured in a SONET core network, an alarm might be
generated toward a router downstream. If this router is configured with FRR,
the hold-off timer must be configured at the SONET level to prevent FRR from
being triggered while the core network is performing a restoration. Enter the
following commands to configure the delay:
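The original commands are not reproduced in this chunk; a hold-off delay on the SONET controller is typically configured along these lines, where the controller path and delay value are assumptions to be checked against your platform's command reference:

```text
RP/0/RP0/CPU0:router(config)# controller SONET0/1/0/0
RP/0/RP0/CPU0:router(config-sonet)# delay trigger line 250
```

Delaying the line-level alarm trigger gives APS time to restore the circuit before FRR reacts.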

Configure the Ignore IS-IS Overload Bit Setting in MPLS-TE: Example

This example shows how to configure the ignore IS-IS overload bit setting in MPLS-TE:

This figure illustrates the IS-IS overload bit scenario:

Figure 10. IS-IS overload bit

Consider an MPLS-TE topology in which the use of nodes that indicate an overload situation is restricted. In this topology, router R7 is in an overload situation, so it cannot be used during TE CSPF computation. To overcome this limitation, the IS-IS overload bit avoidance (OLA) feature was introduced. This feature allows network administrators to prevent RSVP-TE label switched paths (LSPs) from being disabled when a router in the path has its Intermediate System-to-Intermediate System (IS-IS) overload bit set.

The IS-IS overload bit avoidance feature is activated at router R1 using this command:
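The command itself is not included in this chunk; on Cisco IOS XR it generally takes the following form under MPLS-TE configuration (verify the exact syntax for your release):

```text
RP/0/RP0/CPU0:router(config)# mpls traffic-eng
RP/0/RP0/CPU0:router(config-mpls-te)# path-selection ignore overload
```

With this setting, CSPF at R1 no longer excludes R7 from path computation solely because its overload bit is set.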

Configure an Interarea Tunnel: Example

The following configuration example shows how to configure a traffic engineering interarea tunnel. Router R1 is the headend for tunnel1, and router R2 (20.0.0.20) is the tailend. Tunnel1 is configured with a path option that is loosely routed through Ra and Rb.
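A sketch of the headend configuration on R1 follows. The destination 20.0.0.20 comes from the text; the loose-hop addresses standing in for Ra and Rb (10.0.0.2 and 10.0.0.3), the explicit-path name, and the tunnel number are assumptions:

```text
RP/0/RP0/CPU0:router(config)# explicit-path name interarea-path
RP/0/RP0/CPU0:router(config-expl-path)# index 10 next-address loose ipv4 unicast 10.0.0.2
RP/0/RP0/CPU0:router(config-expl-path)# index 20 next-address loose ipv4 unicast 10.0.0.3
RP/0/RP0/CPU0:router(config-expl-path)# exit
RP/0/RP0/CPU0:router(config)# interface tunnel-te 1
RP/0/RP0/CPU0:router(config-if)# destination 20.0.0.20
RP/0/RP0/CPU0:router(config-if)# path-option 1 explicit name interarea-path
```

Loose hops let each area's CSPF expand the path segment to the next hop, which is what makes the interarea tunnel possible.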

Configure Point-to-Multipoint for the Source: Example

At the source, multicast routing must be enabled on both the
tunnel-mte interface and the customer-facing interface. Then, the static-group
must be configured on the tunnel-mte interface to forward the specified
multicast traffic over the P2MP LSP.

Note

The multicast group address, which is in the Source-Specific Multicast
(SSM) address range (ff35::/16), must be used in the static-group configuration
because
Cisco IOS XR software supports
only SSM for Label Switched Multicast (LSM). Additionally, the customer-facing
interface must have an IPv6 address.
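A sketch of the source-side configuration implied above, using an assumed SSM group ff35::1, source 2000::1, customer-facing interface, and tunnel-mte 1; command shapes vary by release, so verify the multicast-routing and mld static-group forms in your command reference:

```text
RP/0/RP0/CPU0:router(config)# multicast-routing
RP/0/RP0/CPU0:router(config-mcast)# address-family ipv6
RP/0/RP0/CPU0:router(config-mcast-default-ipv6)# interface tunnel-mte 1
RP/0/RP0/CPU0:router(config-mcast-default-ipv6-if)# enable
RP/0/RP0/CPU0:router(config-mcast-default-ipv6-if)# exit
RP/0/RP0/CPU0:router(config-mcast-default-ipv6)# interface GigabitEthernet0/0/0/0
RP/0/RP0/CPU0:router(config-mcast-default-ipv6-if)# enable
RP/0/RP0/CPU0:router(config-mcast-default-ipv6-if)# exit
RP/0/RP0/CPU0:router(config)# router mld
RP/0/RP0/CPU0:router(config-mld)# interface tunnel-mte 1
RP/0/RP0/CPU0:router(config-mld-default-if)# static-group ff35::1 2000::1
```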

Technical Assistance

Description

Link

The Cisco Technical Support website contains thousands of
pages of searchable technical content, including links to products,
technologies, solutions, technical tips, and tools. Registered Cisco.com users
can log in from this page to access even more content.