Traffic engineering the service provider network

Traffic engineering distributes bandwidth load across network links. Learn about the evolution of traffic engineering and its role in networks transitioning from Layer 2 to IP technology. Then dive into MPLS traffic engineering and all the benefits it provides for network engineers and designers, as well as MPLS TE myths and half-truths.


Traffic engineering's role in next-generation networks

Traditional service provider networks provided Layer 2 point-to-point virtual circuits with contractually predefined bandwidth. Regardless of the technology used to implement the service (X.25, Frame Relay or ATM), traffic engineering (the optimal distribution of load across all available network links) was inherent in the process.

In most cases, the calculation of the optimum routing of virtual circuits was done off-line by a network management platform; advanced networks (offering Frame Relay or ATM switched virtual circuits) also offered real-time on-demand establishment of virtual circuits. However, the process was always the same:

The free network capacity was examined.

The end-to-end hop-by-hop path through the network that satisfied the contractual requirements (and, if needed, met other criteria) was computed.

The virtual circuit was then established along the computed path.

Routed IP networks use a completely different approach:

The traffic contract specifies ingress and egress bandwidth for each site, not site-to-site traffic requirements.

Every IP packet is routed through the network independently, and every router in the path makes independent next-hop decisions.

Once traffic flows merge, all packets toward the same destination take the same path (whereas multiple virtual circuits toward the same site could traverse different links).

Simplified to the extreme, the two paradigms could be expressed as follows:

Layer 2 switched networks assume that the bandwidth is expensive and try to optimize its usage, resulting in complex circuit setup mechanisms and expensive switching methods.

IP networks assume that the bandwidth is "free" and focus on low-cost, high-speed switching of a high volume of traffic.

The significant difference between the cost-per-switched-megabit of a Layer 2 network (for example, ATM) and a routed (IP) network has forced nearly all service providers to build next-generation networks exclusively on IP. Even in modern fiber-optics networks, however, bandwidth is not totally free, and there are always scenarios where you could use free resources of an underutilized link to ease the pressure on an overloaded path. Effectively, you would need traffic engineering capabilities in routed IP networks, but they are simply not available in the traditional hop-by-hop, destination-only routing model that most IP networks use.

Various approaches (including creative designs, as well as new technologies) have been tried to bring traffic engineering capabilities to IP-based networks. We can group them roughly into these categories:

The network core uses Layer 2 switched technology (ATM or Frame Relay) that has inherent traffic engineering capabilities. Virtual circuits are then established between edge routers as needed.

IP routing tricks are used to modify the operation of IP routing protocols, resulting in adjustments to the path the packets are taking through the network.

The Layer 2 network core design was used extensively when the service providers were introducing IP as an additional service into their WAN networks. Many large service providers have already dropped this approach because it does not result in the cost reduction or increase in switching speed that pure IP-based networks bring.

The IP routing tricks try to shift the traffic load to underutilized links by artificially lowering their cost, thus making them look more attractive to routing protocols like OSPF or IS-IS. Fine-tuning the link costs in a complex network to achieve good traffic distribution is almost impossible, so this approach works only in niche situations. Significantly better results can be achieved with Border Gateway Protocol (BGP) thanks to a rich set of attributes it can carry with every IP route. Note that BGP was originally designed to support various routing policies, so you could implement rudimentary traffic engineering as yet another routing policy.
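The effect (and fragility) of cost-tuning can be illustrated with a toy shortest-path computation. This is a minimal sketch only; the four-router topology and all link costs are invented for illustration:

```python
import heapq

def shortest_path(links, src, dst):
    """Dijkstra's algorithm over a dict of {(node, node): cost} links."""
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for neigh, cost in graph.get(node, []):
            if neigh not in seen:
                heapq.heappush(heap, (dist + cost, neigh, path + [neigh]))
    return None

# Hypothetical topology: the direct A-B link is congested.
links = {("A", "B"): 10, ("A", "C"): 10, ("C", "D"): 10, ("D", "B"): 10}
print(shortest_path(links, "A", "B"))   # direct link wins: (10, ['A', 'B'])

# The "cost-tuning trick": make the congested link look expensive.
links[("A", "B")] = 40
print(shortest_path(links, "A", "B"))   # traffic shifts: (30, ['A', 'C', 'D', 'B'])
```

Note that raising the A-B cost moves all A-to-B traffic at once; there is no way to shift only part of the load, which is exactly why cost-tuning scales so poorly in complex topologies.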

Virtual circuits implemented with IP-over-IP tunnels (using a variety of technologies) are approximately as complex as routing protocol cost-tuning and so are better avoided (although they could still represent a valuable temporary fix). MPLS traffic engineering (MPLS TE), on the other hand, is a complete implementation of traffic engineering technology rivaling the features available in advanced ATM or Frame Relay networks. For example:

The MPLS TE network tracks available resources on each link using extensions to IP routing protocols (only OSPF and IS-IS are supported, as MPLS TE needs full visibility of network topology, which is not available with any other routing protocol).

Whenever a new tunnel (the MPLS TE terminology for virtual circuit) needs to be established, the head-end router computes the end-to-end path through the network based on the reported state of available resources.

The tunnel establishment request is signaled hop-by-hop from the tunnel head-end to the tunnel tail router, reserving resources on every hop.

After the tunnel is established, the new path is seamlessly integrated with the routing protocols running in the network.
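The constrained path computation performed by the head-end router can be sketched as a two-step process: prune every link that cannot satisfy the requested bandwidth, then run an ordinary shortest-path computation over what remains. This is a simplified illustration with an invented topology; real CSPF implementations also handle setup priorities, link affinities and tie-breaking:

```python
import heapq

def cspf(links, src, dst, bandwidth):
    """Constrained SPF sketch: drop links with insufficient available
    bandwidth, then run Dijkstra over the pruned topology."""
    graph = {}
    for (a, b), (cost, avail) in links.items():
        if avail >= bandwidth:          # pruning step
            graph.setdefault(a, []).append((b, cost))
            graph.setdefault(b, []).append((a, cost))
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neigh, cost in graph.get(node, []):
            if neigh not in seen:
                heapq.heappush(heap, (dist + cost, neigh, path + [neigh]))
    return None                         # no path satisfies the constraint

# Invented topology: (IGP cost, available Mbps) per link.
links = {("A", "B"): (10, 20), ("A", "C"): (10, 100), ("C", "B"): (10, 100)}
print(cspf(links, "A", "B", bandwidth=50))   # A-B pruned: ['A', 'C', 'B']
print(cspf(links, "A", "B", bandwidth=10))   # A-B fits:   ['A', 'B']
```

The pruning step is why MPLS TE needs the resource information flooded by the OSPF or IS-IS extensions: without per-link available bandwidth, the head-end router would have nothing to prune on.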

The support for MPLS TE is available in high-end and midlevel routers from multiple vendors. It's therefore highly advisable that you consider the requirements of MPLS TE (OSPF or IS-IS, for example) in your network design. If you implement the basic infrastructure needed by MPLS TE during the network deployment, you'll have it ready to use when you need to shift the traffic to cope with unexpected increases in bandwidth usage or delayed deployment of higher-speed links.
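As an illustration, the basic MPLS TE infrastructure on a Cisco IOS router (the implementation discussed later in this article) amounts to a few global, interface and IGP commands. The interface name, OSPF process number and RSVP bandwidth value below are placeholders:

```
mpls traffic-eng tunnels
!
interface GigabitEthernet0/0
 mpls traffic-eng tunnels
 ip rsvp bandwidth 500000
!
router ospf 1
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng area 0
```

With this in place, no tunnels exist yet and nothing changes in the forwarding path; the network is simply ready for TE tunnels to be configured when they are needed.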

MPLS traffic engineering essentials

MPLS (Multiprotocol Label Switching) is the end result of the efforts to integrate Layer 3 switching, better known as routing, with Layer 2 WAN backbones, primarily ATM. Even though the IP+ATM paradigm is mostly gone today because of the drastic shift to IP-only networks in the last few years, MPLS retains a number of useful features from Layer 2 technologies. One of the most notable is the ability to send packets across the network through a virtual circuit, called a Label Switched Path (LSP) in MPLS terminology.

NOTE: While the Layer 2 virtual circuits are almost always bidirectional (although the traffic contracts in each direction can be different), the LSPs are always unidirectional. If you need bidirectional connectivity between a pair of routers, you have to establish two LSPs.

The LSPs in MPLS networks are usually established based on the contents of IP routing tables in core routers. However, nothing prevents LSPs from being established and used through other means, provided that:

All the routers along the path agree on a common signaling protocol.

The router where the LSP starts (head-end router) and the router where the LSP ends (tail-end router) agree on what's traveling across the LSP.

NOTE: The other routers along the LSP do not inspect the packets traversing the LSP and are thus oblivious to their content; they just need to understand the signaling protocol that is used to establish the LSP.

With the necessary infrastructure in place, it was only a matter of time before someone would get the idea to use LSPs to implement MPLS-based traffic engineering -- and the first implementation in Cisco IOS closely followed the introduction of base MPLS (which at that time was called tag switching). The MPLS traffic engineering technology has evolved and matured significantly since then, but the concepts have not changed much since its introduction:

The head-end router computes the best hop-by-hop path across the network, based on resource availability advertised by other routers. Extensions to link-state routing protocols (OSPF or IS-IS) are used to advertise resource availability.

NOTE: The first MPLS TE implementations supported only static hop-by-hop definitions. These can still be used in situations where you need a very tight hop-by-hop control over the path the MPLS TE LSP will take or in networks using a routing protocol that does not have MPLS TE extensions.

The head-end router requests LSP establishment using a dedicated signaling protocol. As is often the case, two protocols were designed to provide the same functionality, with Cisco and Juniper implementing RSVP-TE (RSVP extensions for traffic engineering) and Nortel/Nokia favoring CR-LDP (Constraint-based Routing Label Distribution Protocol).

The routers along the path accept (or reject) the MPLS TE LSP establishment request and set up the necessary internal MPLS switching infrastructure.

When all the routers in the path accept the LSP signaling request, the MPLS TE LSP is operational.

The head-end router can use MPLS TE LSP to handle special data (initial implementations only supported static routing into MPLS traffic engineering tunnels) or seamlessly integrate the new path into the link-state routing protocol.
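The hop-by-hop acceptance and reservation described above can be sketched as follows. This is a deliberately simplified model of RSVP-TE admission control (it ignores setup priorities, preemption and shared reservations, and all bandwidth numbers are invented):

```python
def signal_lsp(available, path, bandwidth):
    """Try to reserve `bandwidth` on every hop of `path`.
    `available` maps (node, node) links to free bandwidth and is
    updated in place on success.  Returns True if the LSP came up."""
    reserved = []
    for link in zip(path, path[1:]):
        if available.get(link, 0) < bandwidth:
            # Mid-path rejection: release the hops already reserved.
            for done in reserved:
                available[done] += bandwidth
            return False
        available[link] -= bandwidth
        reserved.append(link)
    return True

available = {("A", "B"): 100, ("B", "C"): 40}
print(signal_lsp(available, ["A", "B", "C"], 50))  # False: B-C too small
print(available)                                   # rolled back: {('A', 'B'): 100, ('B', 'C'): 40}
print(signal_lsp(available, ["A", "B", "C"], 30))  # True
print(available)                                   # {('A', 'B'): 70, ('B', 'C'): 10}
```

The rollback on rejection mirrors what happens in the network: a tunnel that cannot be established on one hop must not leave stale reservations behind on the hops that had already accepted it.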

The tight integration of MPLS traffic engineering with the IP routing protocols provides an important advantage over the traditional Layer 2 WAN networks. In the Layer 2 backbones, the operator had to establish all the virtual circuits across the backbone (using a network management platform or by configuring switched virtual circuits on edge devices), whereas MPLS TE can automatically augment and enhance the mesh of LSPs already established based on network topology discovered by IP routing protocols. You can thus use MPLS traffic engineering as a short-term measure to relieve temporary network congestion or as a network core optimization tool without involving the edge routers.

In recent years, MPLS traffic engineering technology (and its implementation) has grown well beyond features offered by traditional WAN networks. For example:

Re-optimization allows the head-end routers to utilize resources that became available after the LSP was established.

Make-before-break signaling enables the head-end router to provision the optimized LSP before tearing down the already established LSP.

NOTE: Thanks to RSVP-TE functionality, the reservations on the path segments common to old and new LSP are not counted twice.

Automatic bandwidth adjustments measure the actual traffic sent across an MPLS TE LSP and adjust its reservations to match the actual usage.
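The automatic bandwidth adjustment can be sketched with the simplest possible policy: reserve the peak rate observed during the last measurement interval, clamped to configured bounds. Real implementations layer configurable sampling and adjustment timers on top of this idea; the numbers below are invented:

```python
def adjust_reservation(samples, minimum=0, maximum=float("inf")):
    """Pick the new reservation for an MPLS TE tunnel from the peak
    traffic rate measured during the last adjustment interval,
    clamped to the configured minimum/maximum bounds."""
    peak = max(samples)
    return min(max(peak, minimum), maximum)

# Measured Mbps samples over one interval:
print(adjust_reservation([12, 48, 35], minimum=10, maximum=100))  # 48
print(adjust_reservation([150, 90], maximum=100))                 # clamped to 100
```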

10 MPLS traffic engineering myths and half-truths

As with any complex technology, network engineers, designers and consultants tend to misunderstand some nuances of MPLS traffic engineering, resulting in myths and half-truths that are propagated throughout the industry. Here I will address some of the more common ones. The analysis is based on MPLS TE technology as described in various Internet Engineering Task Force (IETF) documents, as well as the current implementation available in Cisco IOS releases 12.4T and 12.2S.

1. Myth: MPLS TE is a quality-of-service feature. While MPLS TE can be used to shift traffic from overloaded paths to alternate paths with free bandwidth, it contains no inherent quality-of-service (QoS) features like guaranteed bandwidth, policing or shaping. The quality-of-service features have to be designed and deployed separately on top of the MPLS TE infrastructure. The deployment of MPLS TE in a network does not (by itself) improve the quality of its services.

2. Half-truth: MPLS TE improves network convergence. The MPLS Fast Reroute functionality provides a temporary fix to a link or node failure by shifting the MPLS TE-encapsulated traffic to a preconfigured bypass (no rerouting is provided for regular IP traffic). The convergence and subsequent network topology re-optimization is still performed by the IP routing protocols.

3. Myth: MPLS TE has to be deployed throughout the network. You can use MPLS TE in tactical situations, for example, between a pair of routers to shift the traffic away from a congested link or to provide a fast reroute protection of a critical link in your network.

4. Half-truth: MPLS TE can solve the network congestion issues. MPLS TE does not create new bandwidth; it only allows you to use the existing bandwidth more efficiently. You can use the MPLS TE tunnels to shift the traffic from the lowest-cost path computed by IP routing protocols to an alternate, less-utilized path, temporarily relieving the congested link. But that action might congest the alternate path, resulting in a domino effect throughout the network.

5. Myth: Bandwidth reserved by an MPLS TE tunnel will be available to the tunneled traffic. Although the MPLS TE technology uses extensions to the Resource Reservation Protocol (RSVP), which was originally designed to provide end-to-end QoS in IP networks, the MPLS TE RSVP reservations serve solely as an accounting mechanism within the MPLS TE module, preventing link oversubscription by MPLS TE paths. The reservations do not result in any QoS actions on the intermediate nodes; without additional manual configuration on those nodes, MPLS TE traffic is treated no differently from regular IP or MPLS traffic.

6. Myth: To use MPLS TE, you have to deploy MPLS in your network. MPLS TE can work without a network-wide MPLS deployment; traffic can be sent across MPLS TE tunnels without a label distribution protocol (LDP or TDP).

Note: If you're running MPLS-based Virtual Private Networks (VPNs), you have to run LDP over an MPLS TE tunnel unless the tunnel terminates at the edge of your network on a Provider Edge (PE) router.

7. Half-truth: MPLS TE only works with OSPF and IS-IS routing protocols. MPLS TE paths can be configured manually (specifying all hops in the path) and independently of the IP routing protocol deployed in the network. However, if you want to have automatic path calculations and automatic rerouting of IP traffic onto MPLS TE paths, you have to use OSPF or IS-IS.

8. Half-truth: If you use MPLS TE Fast Reroute, the quality of service will not degrade following a network failure. MPLS TE Fast Reroute shifts MPLS TE tunnels established across a failed link or node onto preconfigured backup tunnels. The overall quality of service will not degrade only if:

The backup tunnels have adequate bandwidth;

There is enough free capacity on the backup paths;

The quality-of-service mechanisms guarantee the bandwidth to the backup tunnels.

In all other cases, either the rerouted traffic or the traffic traversing the backup path prior to node or link failure will encounter degraded quality-of-service.
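The first two conditions lend themselves to a mechanical check: the backup path needs enough free capacity to absorb the combined reserved bandwidth of all the tunnels it protects. A trivial sketch, with invented numbers:

```python
def backup_adequate(protected_tunnels, backup_path_capacity):
    """Return True if the backup path can absorb all protected tunnels
    at their reserved bandwidth without oversubscription."""
    return sum(protected_tunnels) <= backup_path_capacity

# Three protected tunnels (reserved Mbps) rerouted onto one backup path:
print(backup_adequate([100, 250, 80], backup_path_capacity=500))  # True
print(backup_adequate([100, 250, 80], backup_path_capacity=400))  # False
```

The third condition cannot be checked this way: it depends on the QoS policy actually configured on the backup path, which is exactly the point of myth 5 above.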

9. Half-truth: MPLS TE does not work across OSPF areas. Inter-area MPLS TE paths can be established, but with significant limitations:

The MPLS TE path cannot be computed automatically; you have to manually specify at least the Area Border Routers (ABRs) that the MPLS TE path crosses.

Automatic mapping of IP traffic onto MPLS TE paths (autoroute) is not available, as the router establishing the MPLS TE path does not know the exact topology of other OSPF areas.

Inter-area MPLS TE paths cannot be re-optimized after they have been established.

Note: The same is true for IS-IS. Truly dynamic MPLS TE tunnels can be established within a single IS-IS level, but they can cross the level boundary only if you manually configure the transition points.

10. No longer true: You can't differentiate customer traffic based on Class of Service if you use MPLS TE. The technology itself never had this limitation, but for a long time Cisco IOS did not support multiple parallel tunnels carrying different traffic classes.

About the Author

Ivan Pepelnjak, CCIE No. 1354, is a 25-year veteran of the networking industry. He has more than 10 years of experience in designing, installing, troubleshooting and operating large service provider and enterprise WAN and LAN networks and is currently chief technology advisor at NIL Data Communications, focusing on advanced IP-based networks and web technologies. His books include MPLS and VPN Architectures and EIGRP Network Design. You can read his blog here: http://ioshints.blogspot.com/index.html.
