Multilink Point-to-Point Protocol

Multilink Point-to-Point Protocol (MLP) is used to combine multiple physical links into a single logical connection, called an MLP bundle (see Figure 22-1). Using MLP, you can increase bandwidth and more easily manage all of the circuits through a single interface. The MLP connection has a maximum bandwidth equal to the sum of the bandwidths of the component links. MLP also provides load balancing, multivendor interoperability, packet fragmentation and reassembly, and increased redundancy. The Cisco 10000 series router implements the MLP specifications defined in RFC 1990.

MLP provides traffic load balancing over multiple wide-area network (WAN) links by sending packets and packet fragments over the bundle's member links. The multiple links come up in response to a defined load threshold. MLP mechanisms can calculate the load on inbound and outbound traffic together, or in either direction alone, as needed for the traffic between specific sites. MLP provides bandwidth on demand and reduces transmission latency across WAN links.

MLP allows packets to be fragmented and the fragments to be sent at the same time over multiple point-to-point links to the same remote address. Large nonreal-time packets are multilink encapsulated and fragmented into a small enough size to satisfy the delay requirements of real-time traffic. However, the smaller real-time packets are not multilink encapsulated. Instead, MLP interleaving provides a special transmit queue (priority queue) for these delay-sensitive packets to allow the packets to be sent earlier than other packet flows. Real-time packets remain intact and MLP interleaving mechanisms send the real-time packets between fragments of the larger nonreal-time packets. For more information about link fragmentation and interleaving, see the "Fragmenting and Interleaving Real-Time and Nonreal-Time Packets" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.

MLP can provide increased redundancy by allowing traffic to flow over the remaining member links when a port fails. You can configure the member links on separate physical ports on the same line card or on different line cards. If a port becomes unavailable, MLP directs traffic over the remaining member links with minimal disruption to the traffic flow.

MLP mechanisms preserve packet ordering over an entire bundle, guaranteeing that network packets are processed at the receiving system in the same order that they are logically transmitted.

Valid multilink interface values for MLP over serial or multi-VC MLP over ATM are from 1 to 9999 (Release 12.2(28)SB and later), or from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later). For example:

Router(config)# interface multilink 8

Feature History for Multilink PPP

•12.0(23)SX (PRE1)—The MLP over Serial feature was introduced on the Cisco 10000 series router.

•12.2(28)SB (PRE2)—The MLP over Serial, Single-VC MLP over ATM VCs, and Multi-VC MLP over ATM VCs features were introduced on the PRE2.

•12.2(31)SB2 (PRE3)—Support was added for the PRE3, and the valid multilink interface ranges for MLP over serial or multi-VC MLP over ATM changed from 1 to 9999 to 1 to 9999 and 65,536 to 2,147,483,647.

•12.2(33)SB (PRE3 and PRE4)—The MLPPP on LNS feature was introduced on the Cisco 10000 series router, supported on the PRE3 and PRE4. This feature is not supported on the PRE2.

•12.2(33)SB2 (PRE3)—The MLPoE LAC Switching feature was introduced on the Cisco 10000 series router.

•12.2(33)XNE (PRE3 and PRE4)—The MLPoE at PTA feature was introduced on the Cisco 10000 series router.

MLP Bundles

MLP combines multiple physical links into a logical bundle called an MLP bundle (see Figure 22-1). An MLP bundle is a single, virtual interface that connects to the peer system. Having a single virtual interface enables fancy queuing and QoS to be applied to the traffic on that interface (for example, policing and traffic shaping can be applied to the traffic flows). Without a bundle, each individual link to the peer system might do some form of fancy queuing, but no link knows about the traffic on the other parallel links, so fancy queuing and QoS cannot be applied uniformly to the entire aggregate traffic between the system and its peer. A single virtual interface also simplifies the task of monitoring traffic to the peer system (for example, traffic statistics are all on one interface).

Figure 22-1 Multilink PPP Bundle

An endpoint discriminator is used to identify the member links of the MLP bundle.

Restrictions for MLP Bundles

The router supports member links at T1/E1 speeds or below for MLP bundling. You cannot bundle higher-speed links (for example, E3) because the router can store only 50 ms of data, based on the E1 speed.

MLP Bundles and PPP Links

MLP works with fully functional Point-to-Point Protocol (PPP) interfaces. An MLP bundle can consist of a PPP over serial link and a PPP over ATM link. As long as each link behaves like a standard serial interface, the mixed links work properly in a bundle.

Adding the ppp multilink group command to a link's configuration does not make that link part of the specified bundle. This command only places a restriction on the link. If the link negotiates to use multilink, then it must provide the proper identification to join the bundle on the multilink interface or to activate a bundle on that interface. If the link provides identification that coincides with another active bundle in the system, or the link fails to match the identity of a bundle that is already active on the multilink group interface, the connection terminates.

A link joins an MLP bundle only if it negotiates to use multilink when the connection is established and the identification information exchanged matches that of an existing bundle. If a link supplies identification information that does not match any known bundle, MLP creates a new bundle for the user.

Cisco 10000 series routers do not support VAI bundle interfaces in a PTA configuration. VAI bundles are supported only on the L2TP network server (LNS) for MLPoLNS.

MLP Groups

When you configure the ppp multilink group command on a link, the command applies a restriction to the link that indicates the link is not allowed to join any bundle other than the indicated group interface, and that the connection is to be terminated if the peer system attempts to join a different bundle.

A link actually joins a bundle when the identification keys for that link match the identification keys for an existing bundle (see the "How MLP Determines the Link a Bundle Joins" section). Configuring the ppp multilink group command on a link does not allow the link to bypass this process, unless a bundle does not already exist for this particular user. When matching links to bundles, the identification keys are always the determining factors.

Because the ppp multilink group command merely places a restriction on the link, any MLP-enabled link that is not assigned to a particular multilink group can join the dedicated bundle interface if it provides the correct identification keys for that dedicated bundle. Removing the ppp multilink group command from an active link that currently is a member of a multilink group does not make that link leave the bundle because the link is still a valid member. It is just no longer restricted to this one bundle.

How MLP Determines the Link a Bundle Joins

A link joins a bundle when the identification keys for that link match the identification keys for an existing bundle.

Two keys define the identity of a remote system: the PPP username and the MLP endpoint discriminator. The PPP authentication mechanisms (for example, PAP or CHAP) learn the PPP username. The endpoint discriminator is an option negotiated by the Link Control Protocol (LCP). Therefore, a bundle consists of all of the links that have the same PPP username and endpoint discriminator.

A link that does not provide a PPP username or endpoint discriminator is an anonymous link. MLP collects all of the anonymous links into a single bundle referred to as the anonymous bundle or default bundle. Typically, there can be only one anonymous bundle. Any anonymous links that negotiate MLP join (or create) the anonymous bundle.

When using multilink group interfaces, more than one anonymous peer is allowed. When you preassign a link to an MLP bundle by using the ppp multilink group command, and the link is anonymous, the link joins the bundle interface it is assigned to if the interface is not already active and associated with a nonanonymous user.

MLP determines the bundle a link joins in the following steps:

1. When a link connects, MLP creates a bundle name identifier for the link.

2. MLP then searches for a bundle with the same bundle name identifier.

–If a bundle with the same identifier exists, the link joins that bundle.

–If a bundle with the same identifier does not exist, MLP creates a new bundle with the same identifier as the link, and the link is the first link in the bundle.
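The lookup-or-join logic above can be sketched in a few lines of Python (an illustrative model only; the names bundles and join_bundle are ours, and IOS does not expose such an API):

```python
# Illustrative model of MLP bundle selection; all names are invented
# for this sketch and do not correspond to IOS internals.

bundles = {}  # bundle name identifier -> list of member links

def join_bundle(link, bundle_id):
    """Join the bundle whose identifier matches bundle_id, creating the
    bundle if no match exists (the link becomes its first member)."""
    if bundle_id not in bundles:
        bundles[bundle_id] = []          # no match: create a new bundle
    bundles[bundle_id].append(link)      # matching bundle: the link joins it
    return bundles[bundle_id]

join_bundle("Serial1/0/0:1", "user1")
join_bundle("Serial1/0/0:2", "user1")    # same identifier: joins the same bundle
join_bundle("Serial2/0/0:1", "user2")    # new identifier: creates a second bundle
```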

Table 22-2 describes the commands and associated algorithm used to generate a bundle name. In the table, "username" typically means the authenticated username; however, an alternate name can be used instead. The alternate name is usually an expanded version of the username (for example, VPDN tunnels might include the network access server name) or a name derived from other sources.

Table 22-2 Bundle Name Generation

Command

Bundle Name Generation Algorithm

multilink bundle-name authenticated

The bundle name is the peer's username, if available.

If the peer does not provide a username, the algorithm uses the peer's endpoint discriminator.

Note The authenticated keyword specifies that the bundle name is based on whatever notion of a username the system can derive. The endpoint discriminator is ignored entirely, unless it is the only name that can be found.

The multilink bundle-name authenticated command is the default naming policy.

multilink bundle-name endpoint

The bundle name is the peer's endpoint discriminator.

If there is no endpoint discriminator, the algorithm uses the peer's username.

multilink bundle-name both

The name of the bundle is a concatenation of the username and the endpoint discriminator.
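The three naming policies in Table 22-2 can be summarized in a short Python sketch (illustrative only; the concatenation format in the both case is an assumption, since the exact separator is not specified here):

```python
def bundle_name(username, endpoint, policy="authenticated"):
    """Return the bundle name per the multilink bundle-name policy.
    username or endpoint may be None when the peer did not supply one."""
    if policy == "authenticated":   # default: username, else endpoint discriminator
        return username or endpoint
    if policy == "endpoint":        # endpoint discriminator, else username
        return endpoint or username
    if policy == "both":            # concatenation of username and discriminator
        return f"{username}:{endpoint}"   # separator is an assumption
    raise ValueError(f"unknown policy: {policy}")
```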

IP Addresses on MLP-Enabled Links

Configuring an IP address on a link used for MLP does not always work as expected. For example, consider the following configuration:

interface Serial 1/0/0

ip address 10.2.3.4 255.255.255.0

encapsulation ppp

ppp multilink

You might expect the following behavior as a result of this configuration:

•If the interface does not negotiate to use MLP and the interface comes up as a regular PPP link, then the interface negotiates the Internet Protocol Control Protocol (IPCP) and its local address is 10.2.3.4.

•If the interface did negotiate to use MLP, then the configured IP address is meaningless because the link is not visible to IP while it is part of a bundle. The bundle is a network-level interface and can have its own IP address, depending on the configuration used for the bundle.

Instead, if a link with an IP address configured comes up and joins a bundle, IP installs a route directly to that link interface and it might try to route packets directly to that link, bypassing the MLP bundle. This behavior occurs because IP considers an interface to be up for IP traffic whenever IP is configured on the interface and the interface is up. MLP intercepts and discards these misdirected frames. This condition occurs frequently if you use a virtual template interface to configure both the PPPoX member links and the bundle interface.

Using unnumbered IP interfaces enables you to work around this problem and still configure an IP address on an MLP-enabled link. The following example shows how to configure Multi-VC MLP over ATM using an unnumbered IP interface:

!

interface Multilink1

ip unnumbered Loopback0

peer default ip address pool mlpoa_pool

ppp chap hostname m1

ppp multilink

ppp multilink group 1

!

interface atm 2/0/0

no ip address

!

interface atm 2/0/0.1 point-to-point

pvc 0/32

vbr-nrt 128 64 20

encapsulation aal5mux ppp Virtual-Template1

!

interface atm 2/0/0.2 point-to-point

pvc 0/33

vbr-nrt 128 64 20

encapsulation aal5mux ppp Virtual-Template1

!

interface Virtual-Template1

no ip address

keepalive 30

ppp max-configure 110

ppp max-failure 100

ppp multilink

ppp multilink group 1

ppp timeout retry 5

!

ip local pool mlpoa_pool 100.1.1.1 100.1.7.254

!

Valid Ranges for MLP Interfaces

Table 22-3 lists the valid ranges you can specify when creating MLP interfaces using the interface multilink command.

Table 22-3 MLP Interface Ranges

Cisco IOS Release                PRE2 MLP Interface Ranges                PRE3 MLP Interface Ranges

Release 12.2(28)SB and later     1 to 9999                                —

Release 12.2(31)SB2 and later    1 to 9999 and 65,536 to 2,147,483,647    1 to 9999 and 65,536 to 2,147,483,647

MLP Overhead

MLP encapsulation adds six extra bytes (4 header, 2 checksum) to each outbound packet. These overhead bytes reduce the effective bandwidth on the connection; therefore, the throughput for an MLP bundle is slightly less than an equivalent bandwidth connection that is not using MLP. If the average packet size is large, the extra MLP overhead is not readily apparent; however, if the average packet size is small, the extra overhead becomes more noticeable.

Using MLP fragmentation adds additional overhead to a packet. Each fragment contains six bytes of MLP header plus a link encapsulation header.
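The bandwidth effect of the 6-byte encapsulation can be quantified with a quick calculation (a back-of-the-envelope sketch; the 64- and 1500-byte packet sizes are arbitrary examples):

```python
MLP_OVERHEAD = 6  # bytes per packet: 4-byte header + 2-byte checksum

def overhead_pct(payload_bytes):
    """Percentage of transmitted bytes consumed by MLP encapsulation."""
    return 100.0 * MLP_OVERHEAD / (payload_bytes + MLP_OVERHEAD)

small = overhead_pct(64)    # small packets: roughly 8.6% overhead
large = overhead_pct(1500)  # large packets: roughly 0.4% overhead
```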

Configuration Commands for MLP

This section describes the following commands used to configure MLP and MLP-based link fragmentation and interleaving:

ppp multilink Command

To enable MLP on an interface, use the ppp multilink command in interface configuration mode. To disable MLP, use the no form of the command.

ppp multilink

no ppp multilink

Command History

Cisco IOS Release

Description

12.0(23)SX

The ppp multilink command was introduced on the Cisco 10000 series router.

12.2(16)BX

This command was introduced on the PRE2.

12.2(28)SB

This command was integrated into Cisco IOS Release 12.2(28)SB.

Defaults

The command is disabled.

Usage Guidelines

The ppp multilink command applies only to interfaces that use Point-to-Point Protocol (PPP) encapsulation.

When you use the ppp multilink command, the first channel negotiates the appropriate Network Control Protocol (NCP) layers (such as the IP Control Protocol and IPX Control Protocol), but subsequent links negotiate only the Link Control Protocol (LCP) and MLP.

ppp multilink fragment-delay Command

To specify a maximum size, in units of time, for packet fragments on an MLP bundle, use the ppp multilink fragment-delay command in interface configuration mode. To reset the maximum delay to the default value, use the no form of the command.

ppp multilink fragment-delay delay-max

no ppp multilink fragment-delay delay-max

Syntax Description

delay-max

Specifies the maximum amount of time, in milliseconds, that is required to transmit a fragment. Valid values are from 1 to 1000 milliseconds.

Command History

Cisco IOS Release

Description

12.0(23)SX

The ppp multilink fragment-delay command was introduced on the Cisco 10000 series router.

12.2(16)BX

This command was introduced on the PRE2.

12.2(28)SB

This command was integrated into Cisco IOS Release 12.2(28)SB.

Defaults

If fragmentation is enabled, the fragment delay is 30 milliseconds.

Usage Guidelines

The ppp multilink fragment-delay command is useful when packets are interleaved and traffic characteristics such as delay, jitter, and load balancing must be tightly controlled.

MLP chooses a fragment size on the basis of the maximum delay allowed. If real-time traffic requires a certain maximum boundary on delay, using the ppp multilink fragment-delay command to set that maximum time can ensure that a real-time packet gets interleaved within the fragments of a large packet.

By default, MLP has no fragment size constraint, but the maximum number of fragments is constrained by the number of links. If interleaving is enabled, or if a fragment delay is explicitly configured with the ppp multilink fragment-delay command, then MLP uses a different fragmentation algorithm. In this mode, the number of fragments is unconstrained, but the size of each fragment is limited to the fragment-delay value, or 30 milliseconds if the fragment delay has not been configured.

The ppp multilink fragment-delay command is configured under the multilink interface. The value assigned to the delay-max argument is scaled by the link speed to convert the time value into a byte value.
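That time-to-bytes scaling amounts to multiplying the link rate by the configured delay (a sketch under the assumption that no additional rounding is applied; the T1 rate of 1544 kbps is an example):

```python
def fragment_size_bytes(link_kbps, delay_ms):
    """Largest fragment transmittable within delay_ms at link_kbps:
    (link_kbps * 1000 / 8) bytes/s * (delay_ms / 1000) s = link_kbps * delay_ms / 8."""
    return link_kbps * delay_ms // 8

t1_default = fragment_size_bytes(1544, 30)  # default 30 ms delay on a T1: 5790 bytes
t1_voice = fragment_size_bytes(1544, 10)    # tighter 10 ms delay: 1930 bytes
```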

ppp multilink interleave Command

To enable interleaving of real-time packets among the fragments of larger nonreal-time packets on an MLP bundle, use the ppp multilink interleave command in interface configuration mode. To disable interleaving, use the no form of the command.

ppp multilink interleave

no ppp multilink interleave

Command History

Cisco IOS Release

Description

12.0(23)SX

The ppp multilink interleave command was introduced on the Cisco 10000 series router.

12.2(16)BX

This command was introduced on the PRE2.

12.2(28)SB

This command was integrated into Cisco IOS Release 12.2(28)SB.

Defaults

Interleaving is disabled.

Usage Guidelines

The ppp multilink interleave command applies to multilink interfaces, which are used to configure a bundle.

Interleaving works only when the queuing mode on the bundle is set to fair queuing.

If interleaving is enabled when fragment delay is not configured, the default delay is 30 milliseconds. The fragment size is derived from that delay, depending on the bandwidths of the links.

ppp multilink fragment disable Command

To disable packet fragmentation, use the ppp multilink fragment disable command in interface configuration mode. To enable fragmentation, use the no form of this command.

ppp multilink fragment disable

no ppp multilink fragment disable

Command History

Cisco IOS Release

Description

11.3

This command was introduced as ppp multilink fragmentation.

12.2

The no ppp multilink fragmentation command was changed to ppp multilink fragment disable. The no ppp multilink fragmentation command was recognized and accepted through Cisco IOS Release 12.2.

12.2(28)SB

This command was integrated into Cisco IOS Release 12.2(28)SB.

Usage Guidelines

The ppp multilink fragment delay and ppp multilink interleave commands take precedence over the ppp multilink fragment disable command. Therefore, the ppp multilink fragment disable command has no effect if either of these commands is configured for a multilink interface, and the following message is displayed:

Warning: 'ppp multilink fragment disable' or 'ppp multilink fragment maximum' will be
ignored, since multilink interleaving or fragment delay has been configured and have
higher precedence.

To completely disable fragmentation, you must do the following:

Router(config-if)# no ppp multilink fragment delay

Router(config-if)# no ppp multilink interleave

Router(config-if)# ppp multilink fragment disable

ppp multilink group Command

To restrict a physical link to joining only a designated multilink group interface, use the ppp multilink group command in interface configuration mode. To remove the restriction, use the no form of the command.

ppp multilink group group-number

no ppp multilink group group-number

Syntax Description

group-number

Identifies the multilink group. This number must be identical to the multilink-bundle-number you assigned to the multilink interface. Valid values are:

•MLP over Serial—1 to 9999

•Single-VC MLP over ATM—10,000 and higher

•Multi-VC MLP over ATM—1 to 9999

Command History

Cisco IOS Release

Description

12.0

The multilink-group command was introduced on the Cisco 10000 series router.

12.2

This command was changed to ppp multilink group. The multilink-group command is accepted by the command line interpreter through Cisco IOS Release 12.2.

12.2(28)SB

This command was integrated into Cisco IOS Release 12.2(28)SB.

Defaults

The command is disabled.

Usage Guidelines

By default, the ppp multilink group command is disabled, which means the link can negotiate to join any bundle in the system.

When the ppp multilink group command is configured, the physical link is restricted from joining any but the designated multilink group interface. If a peer at the other end of the link tries to join a different bundle, the connection is severed. This restriction applies when MLP is negotiated between the local end and the peer system. The link can still come up as a regular PPP interface.
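For example, the following configuration restricts a serial link to multilink group interface 5 (the interface and group numbers here are arbitrary illustrations, not taken from the original document):

Router(config)# interface multilink 5

Router(config-if)# exit

Router(config)# interface serial 1/0/0

Router(config-if)# encapsulation ppp

Router(config-if)# ppp multilink

Router(config-if)# ppp multilink group 5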

MLP over Serial Interfaces

The MLP over Serial interfaces feature enables you to bundle together T1 interfaces into a single logical connection called an MLP bundle (see the "MLP Bundles" section). MLP over Serial also provides the following functions:

•Load balancing—MLP provides bandwidth on demand and uses load balancing across all member links (up to 10) to transmit packets and packet fragments. MLP mechanisms calculate the load on either the inbound or outbound traffic between specific sites. Because MLP splits packets and fragments across all member links during transmission, MLP reduces transmission latency across WAN links.

•Increased redundancy—MLP allows traffic to flow over the remaining member links when a port fails. If you configure an MLP bundle that consists of T1 lines from more than one line card and one line card stops operating, the part of the bundle on the other line cards continues to operate.

•Link fragmentation and interleaving—The MLP fragmenting mechanism fragments large nonreal-time packets and sends the fragments at the same time over multiple point-to-point links to the same remote address. Smaller real-time packets remain intact. The MLP interleaving mechanism sends the real-time packets between the fragments of the nonreal-time packets, thus reducing real-time packet delay. For more information about link fragmentation and interleaving, see the "Fragmenting and Interleaving Real-Time and Nonreal-Time Packets" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.

Restrictions and Limitations for MLP over Serial Interfaces

•A multilink bundle can have up to 10 member links. The router supports both full T1 interfaces and fractional T1 interfaces as member links, but fractional T1 interfaces are supported only when LFI is enabled.

Note You can terminate the serial links on multiple line cards in the router chassis if all of the links are the same type, such as T1 or E1.

•The router supports a maximum of 1250 bundles per system and a maximum of 2500 member links per system.

•The valid multilink interface ranges are from 1 to 9999 (Release 12.2(28)SB and later) and from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later). For example:

Router(config)# interface multilink 8

•Interleaving is supported on all member links. To use MLP over Serial-based LFI, enable interleaving on the interface.

•All member links in an MLP bundle must have the same encapsulation type and bandwidth.

•If a virtual template attached to a member link specifies a bandwidth, the router does not clone the specified bandwidth to the MLP bundle and the member links.

•You cannot manually configure the bandwidth on a bundle interface by using the bandwidth command.

•You cannot apply a virtual template with MLP configured to an MLP bundle.

Single-VC MLP over ATM Virtual Circuits

The Single-VC MLP over ATM virtual circuits (VCs) feature enhances the MLP over Serial interfaces feature by enabling you to configure multilink Point-to-Point Protocol (MLP) on an ATM VC. By doing so, you can aggregate multiple data paths (for example, PPP over ATM encapsulated ATM VCs) into a single logical connection called an MLP bundle (see the "MLP Bundles" section). The MLP bundle can have only one member link.

MLP supports link fragmentation and interleaving (LFI). When enabled, the MLP fragmentation mechanism multilink encapsulates large nonreal-time packets and fragments them into a small enough size to satisfy the delay requirements of real-time traffic. The smaller real-time packets remain intact and MLP sends the packets to a special transmit queue, allowing the packets to be sent earlier than other packet flows. The MLP interleaving mechanism sends the real-time packets between the fragments of the nonreal-time packets. For more information about link fragmentation and interleaving, see the "Fragmenting and Interleaving Real-Time and Nonreal-Time Packets" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.

Performance and Scalability for Single-VC MLP over ATM

•Configure the hold-queue command in interface configuration mode for all physical interfaces, except when configuring the OC-12 ATM line card. The 1-Port OC-12 ATM line card does not require the hold-queue command. For example:

Router(config-if)# hold-queue 4096 in

•Configure the following commands and recommended values on the virtual template interface:

–ppp max-configure 110

–ppp max-failure 100

–ppp timeout retry 5

–keepalive 30

For example:

Router(config-if)# ppp max-configure 110

Router(config-if)# ppp max-failure 100

Router(config-if)# ppp timeout retry 5

Router(config-if)# keepalive 30

For more information, see the "Scalability and Performance" chapter in this guide.

Multi-VC MLP over ATM Virtual Circuits

The Multi-VC MLP over ATM virtual circuits (VCs) feature enhances the MLP over Serial interfaces feature by enabling you to configure multilink Point-to-Point Protocol (MLP) on multiple ATM VCs. By doing so, you can aggregate multiple data paths (for example, PPP over ATM encapsulated ATM VCs) into a single logical connection called an MLP bundle (see the "MLP Bundles" section). An MLP bundle can have up to 10 member links.

Multi-VC MLP over ATM provides the following functions:

•Load balancing—MLP provides bandwidth on demand and uses load balancing across all member links (up to 10) to transmit packets and packet fragments. The multiple links come up in response to a defined load threshold. MLP mechanisms calculate load on both inbound and outbound traffic, or on either direction as needed for traffic between specific sites. Because MLP uses all member links to transmit packets and fragments, MLP reduces transmission latency across WAN links.

•Increased redundancy—MLP allows traffic to flow over the remaining member links when a port fails. You can configure the member links on separate physical ports on the same line card or on different line cards. If a port becomes unavailable, MLP directs traffic over the remaining member links with minimal disruption to the traffic flow. MLP mechanisms preserve packet ordering over an entire bundle.

•Link fragmentation and interleaving—The MLP fragmentation mechanism fragments packets and sends the fragments at the same time over multiple point-to-point links to the same remote address. MLP multilink encapsulates large nonreal-time packets and fragments them into a small enough size to satisfy the delay requirements of real-time traffic. The smaller real-time packets remain intact and MLP sends the packets to a special transmit queue, allowing the packets to be sent earlier than other packet flows. The MLP interleaving mechanism sends the real-time packets between the fragments of the nonreal-time packets.

For more information about link fragmentation and interleaving, see the "Fragmenting and Interleaving Real-Time and Nonreal-Time Packets" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.

Performance and Scalability for Multi-VC MLP over ATM VCs

•Configure the hold-queue command in interface configuration mode for all physical interfaces, except when configuring the OC-12 ATM line card. The 1-Port OC-12 ATM line card does not require the hold-queue command. For example:

Router(config-if)# hold-queue 4096 in

•Configure the following commands and recommended values on the virtual template interface:

Router(config-if)# ppp max-configure 110

Router(config-if)# ppp max-failure 100

Router(config-if)# ppp timeout retry 5

Router(config-if)# keepalive 30

MLP on LNS

Networks are migrating from digital subscriber line (DSL) aggregation to broadband remote access server (BRAS) connectivity, with a mix of Ethernet and ATM access networks. There is therefore an increasing need to support MLP and link fragmentation and interleaving (LFI), which allow high-priority, low-latency packets to be interleaved between fragments of lower-priority, higher-latency packets. Voice over IP (VoIP) is an example of a low-latency service.

In Cisco IOS Release 12.2(33)SB, the MLP on LNS feature is introduced for asymmetric digital subscriber line (ADSL) deployments where the upstream bandwidth is low. The MLP on LNS feature can receive fragments from the customer premises equipment (CPE), ensuring lower upstream latency even when a large packet arrives between voice packets.

The MLP on LNS feature bundles virtual private dial-up network (VPDN) sessions into a single logical connection, which forms an MLP bundle on the LNS. Before Cisco IOS Release 12.2(33)SB, Cisco 10000 series routers supported multilink bundle termination only on the PPP termination and aggregation (PTA) router. In Cisco IOS Release 12.2(33)SB, Cisco 10000 series routers also support MLP termination on the LNS. Figure 22-3 shows an MLP on LNS application.

About MLP on LNS

The multilink interface-based configuration requires one virtual template per bundle so that the ppp multilink group command can be configured on the virtual template. However, for the MLP on LNS feature, you can scale up to only 2000 virtual templates.

To address the virtual template scaling issue and to avoid cumbersome configuration management, in the Cisco IOS 12.2(33)SB release, virtual access bundles are supported. In virtual access bundles, the bundle interface is cloned from the virtual template when the first member link is negotiated on the LNS. The virtual access bundle support is limited to bundle termination on LNS.

Before the Cisco IOS 12.2(33)SB release, multilink interface-based configuration was used to distinguish between single-member and multi-member bundles. However, for a virtual access based bundle interface, you can no longer use the interface number range to make this distinction because the bundles are generated dynamically in the Cisco IOS 12.2(33)SB release. Instead, the user-specified value of the ppp multilink links max command is used to distinguish between single-member and multi-member bundles.

The following two figures show two different MLP on LNS bundle configurations supported with the Cisco IOS 12.2(33)SB release. Figure 22-4 shows MLP on the CPE for dial-up networks.

Figure 22-4 MLP on LNS-Multimember Bundle

Figure 22-5 shows a single-member bundle on the CPE. In this configuration, the traffic received by the Cisco 10000 series router is fragmented to interleave high-priority traffic between low-priority network traffic.

Figure 22-5 MLP on LNS-Single-Member Bundle

To accommodate the scaling requirements of up to 2040 multi-member and 10240 single-member bundles for the MLP on LNS feature, an additional reassembly buffer is reserved in the external column memory (XCM). The reassembly buffer reserved in the Cobalt space is used for multi-member bundles, and the XCM reassembly buffer is used for single-member bundles.

The fixed reassembly table size for the MLP on LNS feature to buffer fragments is 256 entries. The reassembly table size restricts the maximum differential delay across the different paths of the member links from the CPE to the LNS. For example, if there are 10 members in a bundle and one member is associated with a "slow" (high-delay) path, then the other nine members must have their fragments and packets buffered while waiting for the slower link. Because the reassembly table stores descriptors, each entry represents one fragment, or a whole packet if fragmentation is not in effect. The amount of time each fragment takes to be transmitted is equal to the configured fragment delay, which is independent of link bandwidth. If fragmentation is not in effect, the transmit time depends on the packet size, with smaller packets taking less time. Therefore, the amount of tolerated differential delay is bounded by how many fragments the reassembly table can buffer for the other nine links.
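As a rough model of that limit: with a 256-entry table, nine waiting links, and one fragment-delay's worth of transmit time per buffered entry, the tolerated skew can be estimated as follows. This is our back-of-the-envelope sketch, not a documented Cisco formula, and the 10 ms fragment delay is an arbitrary example:

```python
def max_differential_delay_ms(table_entries, links, fragment_delay_ms):
    """Rough estimate: fragments from the (links - 1) faster links are
    buffered while the slowest link catches up; each buffered entry
    represents one fragment of fragment_delay_ms transmit time."""
    per_link_entries = table_entries / (links - 1)
    return per_link_entries * fragment_delay_ms

# 10-link bundle, 256-entry reassembly table, 10 ms fragment delay:
estimate = max_differential_delay_ms(256, 10, 10)  # ~284 ms of tolerated skew
```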

4. An HQF resource that is used by the RP and PXF to program physical layer scheduling for an interface. It could be considered an instance of physical layer scheduling; Cisco 10000 series routers currently support 16K such instances. All bundle interfaces (single or multi-member bundles) use one instance of this resource. For single-member bundles the scheduling is done at the logical layer. All members of multi-member bundles are scheduled at the physical layer, so each member link in a multi-member bundle uses one instance.

ppp multilink links max Command

Support for the ppp multilink links max command is new in the Cisco IOS 12.2(33)SB release; it distinguishes between single-member and multi-member MLP on LNS bundles. The default maximum number of links for the Cisco 10000 series routers is 10. The ppp multilink links max 1 command is required for single-member bundles.
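For example, the following sketch marks bundles cloned from a virtual template as single-member bundles (the template number is illustrative):

Router(config)# interface virtual-template 800
Router(config-if)# ppp multilink
Router(config-if)# ppp multilink links maximum 1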

Performance and Scalability of MLP on LNS

The following commands allow for better scaling when used in configuring MLP on LNS:

•Configure the hold-queue command in interface configuration mode for the trunk interfaces on which the L2TP tunnel is negotiated. For example:

Router(config-if)# hold-queue 4096 in

•Configure the following commands and recommended values on the virtual template interface:

Router(config-if)# ppp max-configure 110

Router(config-if)# ppp max-failure 100

Router(config-if)# ppp timeout retry 5

Router(config-if)# keepalive 30

•Configure the lcp renegotiation always command on the VPDN group to renegotiate LCP between the L2TP access concentrator (LAC) and the LNS. The maximum number of multilink member links that can be configured on the Cisco 10000 series routers is 20440. Different combinations of bundle configurations can be configured at any given time, based on resource availability.
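For example, a minimal VPDN group sketch with LCP renegotiation enabled (the group name and virtual template number are illustrative):

vpdn-group LNS_1
accept-dialin
protocol l2tp
virtual-template 500
lcp renegotiation always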

PXF Memory and Performance Impact for MLP on LNS

PXF performance is measured as follows:

•Packet buffer usage

The number of packet buffers available on the PRE3 is 832K small buffers (for packet sizes of 768 bytes or less) and 120K large buffers (for packet sizes greater than 768 bytes). With full scaling of 12280 bundles (2040 multilink and 10240 single link), the average number of buffers is 69.4 small buffers and 10.0 large buffers per bundle for a total of 79.4 buffers per bundle.

Each bundle's reassembly table includes 256 entries. However, in a single-link bundle most packets arrive in order, so fewer buffers are required per single-link bundle. For example, if the average usage is 10 buffers per single-link bundle, the average available per multilink bundle is 436.7 buffers.

•Packet processing rate

The PRE3 has a rate of 10 million contexts per second, which is the rate at which contexts, or packets, pass through the PXF complex. The packet processing rate is measured by the number of packets per second that the PRE3 can either enqueue or dequeue. If each packet takes two passes to be enqueued, the enqueue rate is 5 million contexts per second. Because enqueue and dequeue processing are performed concurrently, overall performance is determined by the worse of the enqueue and dequeue cases, as shown in the following sections.

If the packet processing demand exceeds the available contexts, nonpriority packets are dropped.
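The per-bundle buffer averages quoted above follow from dividing each buffer pool by the 12280-bundle total:

851,968 small buffers (832K) / 12,280 bundles = about 69.4 small buffers per bundle
122,880 large buffers (120K) / 12,280 bundles = about 10.0 large buffers per bundle
69.4 + 10.0 = 79.4 buffers per bundle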

Note A fragmentation delay of 2 ms requires a 16-byte fragment size. Because the MLP over L2TP header can be on the order of 50 bytes, a 16-byte fragment size is not possible.
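For a 64-kbps link, the 16-byte figure follows directly from the fragmentation delay:

64,000 bits/sec x 0.002 sec = 128 bits = 16 bytes per fragment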

Table 22-5 64-kbps Link Speed Performance

Links per multilink bundle                    10       5       2
Total links (multi + single)                  30640    20440   18400
Total context rate (million contexts/sec)     9.8      6.5     4.6

This scenario shows that for 64-kbps links with maximum bundle scaling and high-demand traffic, the PXF can barely keep up with demand. Therefore, the total number of 64-kbps links should not exceed 20440.

This scenario shows that for 2-Mbps links with high traffic demand, Cisco 10000 series routers cannot achieve maximum bundle scaling. Therefore, we recommend that the total number of 2-Mbps links not exceed 4080.

Restrictions and Limitations for MLP on LNS

In Cisco IOS Release 12.2(33)SB, the MLP on LNS feature has the following restrictions:

•The MLP on LNS feature does not include SSO support.

•Bundles are only supported with Gigabit Ethernet and ATM as the trunk between the LAC and LNS.

•Because the bandwidth of the member link is received from the LAC through the Connect speed AV-Pair, L2TP sessions on a single link bundle are provisioned at the logical layer (HQF). The L2TP sessions on a multi-member MLP bundle are provisioned as physical links and are bundled at the physical layer (HQF). For multi-member bundles, the bandwidth received through the AV-Pair carves out the bandwidth from the physical/tunnel interface to reserve it for MLP.

•Oversubscription is not supported for MLP bundled L2TP members or on the underlying tunnel interface.

•All member L2TP sessions within the same bundle belong to the same physical interface and the same L2TP tunnel.

•QoS on multiple member MLP bundles is not supported. If any MLPoLNS bundles are negotiated on the Gigabit Ethernet or ATM VC interface, applying a service policy on the Gigabit Ethernet or ATM VC tunnel interface is also not supported.

•Each member link in a bundle has the same speed. We do not recommend or support configuring member links of different speeds.

•Fragmentation and interleaving on MLP on LNS bundles in the downstream direction are not supported.

•Locally terminated member links and member links forwarded from the LAC are not supported within the same bundle (although the setup is not prevented).

•Sessions from different tunnels are not allowed to join the same bundle. All members of a bundle must be part of the same L2TP tunnel and share the same physical interface.

•Multiclass MLP is not supported for MLP on LNS bundles.

•The MLP on LNS dequeue process adds one additional toaster phase per fragment, which impacts performance.

•Multilink interface based bundles for MLP on LNS are not supported.

•Virtual-access bundle support for existing MLP features is not included in this release.

•The physical tunnel interface (the Gigabit Ethernet or ATM interface on which the L2TP tunnel for the MLPoLNS bundle is negotiated) can change dynamically because of route changes or switching to a backup link when problems occur on the line. These changes require the bundles to renegotiate.

•For multi-member bundles, carve out and reserve the bandwidth from the physical interface, which is the trunk interface on which the L2TP tunnel is negotiated. The bandwidth available for use on the trunk interface or other connection is reduced by the sum of the bandwidth reserved for the bundle.

MLPoE LAC Switching

MLP bundling on the LNS was supported in the Cisco IOS 12.2(33)SB release. The Cisco IOS 12.2(33)SB2 release adds support for switching MLPoEoVLAN sessions received on the LAC to the LNS. However, due to PXF resource limitations, this feature is supported on the PRE3 platform only.

Restrictions for MLPoE LAC Switching

In Cisco IOS Release 12.2(33)SB2, the MLPoE LAC Switching feature has the following restrictions:

•MLPoVLAN encapsulation (between the CPE and LAC) is supported. MLPoEoE and MLPoEoQinQ are not supported.

•L2TP tunnel over Gigabit Ethernet (between the LAC and LNS) and ATM is supported. However, VLAN and QinQ encapsulations for the L2TP tunnel are not supported.

•Similar to the MLP on LNS feature, bundles are only supported with Gigabit Ethernet and ATM as the trunk between the LAC and LNS.

•QoS on interfaces towards the CPE and the tunnel is not supported.

•Only single-member MLPoE bundles are supported (with LFI support). The maximum number of single-member MLPoE bundles that can be supported is 10240.

MLPoE at PTA

In Cisco IOS Release 12.2(33)SB, MLPoE supports LFI on single-link MLP bundles. This support enables high priority and low-latency packets to be interleaved between fragments of lower-priority and higher-latency packets. Figure 22-6 shows a MLPoE DSL network using LFI.

Figure 22-6 MLPoE DSL Network with LFI

In the upstream direction, the CPE fragments nonpriority packets and interleaves high-priority packets between the fragments. In the downstream direction, the Cisco 10000 series router reassembles the fragmented nonpriority packets. However, from Cisco IOS Release 12.2(33)XNE onwards, to reduce delay in sending high-priority packets, the router processes high-priority packets as soon as they arrive.

Point-to-Point Protocol over Ethernet (PPPoE) sessions in the MLPoE at PTA feature are handled as follows:

•All variations of PPPoE, such as PPPoEoE, PPPoEoA, PPPoEo802.1Q, and PPPoEoQinQ, are usable as member links for MLPoE bundles.

•Termination of a MLPoE bundle in a Virtual Routing and Forwarding (VRF) block is similar to terminating a PPPoE session in a VRF.

•MLPoE bundles are distinguished by the username that was used to authenticate the PPPoE session.

•MLPoE reassembles received fragmented MLP packets, but fragmentation is not performed in the transmit direction.

•Unlike MLPoA, PPPoE sessions are created dynamically and cannot be preconfigured for bundling with a multilink interface. MLPoE bundles are also created dynamically, when a PPPoE session with the multilink option enabled is configured for a user for the first time.

ATM Overhead Accounting

Figure 22-6 shows that the outbound interface from the BRAS to the DSLAM is Ethernet, while the encapsulation from the DSLAM to the CPE can be ATM. The DSLAM adds ATM overhead when it segments packets, so the BRAS must account for that overhead to avoid overrunning the subscriber line. ATM overhead accounting is applied through traffic shaping. Per-MLPoE bundle shaping with ATM overhead accounting is supported.
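A sketch of the kind of bi-level (parent-child) shaping policy involved follows; the rates, policy names, and the account keyword arguments are illustrative assumptions that vary by release and subscriber encapsulation:

policy-map child-queues
class class-default
fair-queue
policy-map parent-shaper
class class-default
shape average 512000 account dot1q aal5 snap-rbe-dot1q
service-policy child-queues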

ATM overhead accounting for the MLPoE at PTA feature has the following restrictions:

•Overhead accounting is only supported on single-member MLPoE bundles.

•The line rate needs to be under-subscribed to prevent or reduce loss of traffic downstream.

•Overhead accounting is supported for bi-level service policies only.

•Overhead accounting support from Digital Subscriber Line Access Multiplexer (DSLAM) to CPE in the downstream direction can be applied at both logical and class levels.

•The outer tag on the packet is a service tag, that is, a tag that identifies the DSLAM to the BRAS.

•The multilink interface CLI is not supported, because an MLPoE bundle is created dynamically. The username and endpoint discriminator determine how a link joins a bundle.

•An MLPoE bundle must have a shaped PPPoE session configured.

•The number of bundles and links that MLPoE can use depends on the single-member bundles and links left unused by other MLP bundles, and vice versa. For example, if MLPoA is using 5000 single-member bundles with 5000 member links, MLPoE can use up to only 5240 single-member bundles with 5240 member links, because the single-member bundle pool is exhausted. Similarly, if MLPoA is using 2040 multi-member bundles with 10200 member links (5 links per bundle), MLPoE can use up to only 10240 single-member bundles with 10240 member links, because the member-link pool is exhausted.
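The first calculation follows directly from the system-wide pool of 10240 single-member bundles:

10,240 single-member bundle pool - 5,000 bundles used by MLPoA = 5,240 bundles available to MLPoE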

Note The number of MLP bundles that can be brought up with the MLPoE at PTA feature depends upon the available system resources.

MLP-Based Link Fragmentation and Interleaving

MLP supports link fragmentation and interleaving (LFI). The MLP fragmentation mechanism multilink encapsulates large nonreal-time packets and fragments them into a small enough size to satisfy the delay requirements of real-time traffic. Smaller real-time packets are not multilink encapsulated. Instead, the MLP interleaving mechanism provides a special transmit queue (priority queue) for these delay-sensitive packets to allow the packets to be sent earlier than other packet flows. Real-time packets remain intact and MLP interleaving mechanisms send the real-time packets between fragments of the larger non- real-time packets.

For more information about link fragmentation and interleaving, see the "Fragmenting and Interleaving Real-Time and Nonreal-Time Packets" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.

Note On the PRE1, Cisco 10000 series routers support fragmentation only on single-link bundles configured for LFI using the ppp multilink interleave command. For multiple-link bundles, the router does not support fragmentation and interleaving. You can turn off fragmentation by using the no ppp multilink fragmentation command on the Cisco 10000 router and on the peer end.
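For example, a sketch of enabling interleaving on a single-link bundle (the interface number is illustrative, and lfi-policy is an assumed service policy that defines a priority queue):

interface Multilink 10001
ppp multilink
ppp multilink interleave
service-policy output lfi-policy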

Configuring MLP Bundles and Member Links

Table 22-8 shows the components you must define when configuring MLP (without link fragmentation and interleaving) on specific interface types.

1. A service policy is required only when configuring MLP-based link fragmentation and interleaving (LFI) for Single-VC or Multi-VC MLP over ATM. For MLP-based LFI, a service policy with a priority queue defined must be attached to the multilink interface. The VC does not require a service policy.

To configure MLP bundles and member links, perform the following configuration tasks:

Enabling MLP on a Virtual Template

The virtual template interface is attached to the member links, not to the MLP bundle. You can apply the same virtual template to the member links; you are not required to apply a unique virtual template to each member link.

To enable MLP on a virtual template, enter the following commands beginning in global configuration mode:

Command

Purpose

Step 1

Router(config)# interface virtual-template number

Creates or modifies a virtual template interface that can be configured and applied dynamically to virtual access interfaces. Enters interface configuration mode.

number is a number that identifies the virtual template interface. You can configure up to 5061 total virtual template interfaces (requires Cisco IOS Release 12.2(28)SB and later releases).

Step 2

Router(config-if)# ppp max-configure retries

Specifies the maximum number of configure requests to attempt before stopping the requests due to no response.

retries specifies the maximum number of retries. Valid values are from 1 to 255. The default is 10 retries. We recommend 110 retries.

Step 3

Router(config-if)# ppp max-failure retries

Configures the maximum number of consecutive Configure Negative Acknowledgements (CONFNAKs) to permit before terminating a negotiation.

retries is the maximum number of retries. Valid values are from 1 to 255. The default is 5 retries. We recommend 100 retries.

Step 4

Router(config-if)# ppp timeout retry response-time

Sets the maximum time to wait for Point-to-Point Protocol (PPP) negotiation messages.

response-time specifies the maximum time, in seconds, to wait for a response during PPP negotiation. We recommend 5 seconds.

Step 5

Router(config-if)# keepalive [period]

Enables keepalive packets to be sent at the specified time interval to keep the interface active.

period specifies a time interval, in seconds. The default is 10 seconds. We recommend 30 seconds.

Step 6

Router(config-if)# no ip address

Removes an IP address.

Step 7

Router(config-if)# ppp multilink

Enables MLP on the virtual template interface.

Configuration Example for Enabling MLP on a Virtual Template

Example 22-2 shows a sample configuration for enabling MLP on a virtual template.

Example 22-2 Enabling MLP on a Virtual Template

Router(config)# interface virtual-template1

Router(config-if)# ppp max-configure 110

Router(config-if)# ppp max-failure 100

Router(config-if)# ppp timeout retry 5

Router(config-if)# keepalive 30

Router(config-if)# no ip address

Router(config-if)# ip mroute-cache

Router(config-if)# ppp authentication chap

Router(config-if)# ppp multilink

Router(config-if)# exit

Adding a Serial Member Link to an MLP Bundle

You can configure up to 10 serial member links per MLP bundle. When adding T1 member links, add only full T1 interfaces. If the interface you add to the MLP bundle contains information such as an IP address, routing protocol, or access control list, the router ignores that information. If you remove the interface from the MLP bundle, that information becomes active again.

To add serial member links to an MLP bundle, enter the following commands beginning in global configuration mode:

Specifies the interface that you want to add to the MLP bundle. Enters interface configuration mode.

slot/module/port identifies the line card. The slashes are required.

channel: is the channel group number. The colon is required.

controller-number is the member link controller number.

Step 2

Router(config-if)# hold-queue length {in | out}

Limits the size of the IP output queue on an interface. We recommend that you configure this command on all physical interfaces.

length is a number that specifies the maximum number of packets in the queue. Valid values are from 0 to 4096. We recommend 4096 packets for all line cards. By default, the input queue is 75 packets and the output queue is 40 packets.

in specifies the input queue.

out specifies the output queue.

Step 3

Router(config-if)# ppp max-configure retries

Specifies the maximum number of configure requests to attempt before stopping the requests due to no response.

retries specifies the maximum number of retries. Valid values are from 1 to 255. The default is 10 retries. We recommend 110 retries.

Step 4

Router(config-if)# ppp max-failure retries

Configures the maximum number of consecutive Configure Negative Acknowledgements (CONFNAKs) to permit before terminating a negotiation.

retries is the maximum number of retries. Valid values are from 1 to 255. The default is 5 retries. We recommend 100 retries.

Step 5

Router(config-if)# ppp timeout retry response-time

Sets the maximum time to wait for Point-to-Point Protocol (PPP) negotiation messages.

response-time specifies the maximum time, in seconds, to wait for a response during PPP negotiation. We recommend 5 seconds.

Step 6

Router(config-if)# keepalive [period]

Enables keepalive packets to be sent at the specified time interval to keep the interface active.

period specifies a time interval, in seconds. The default is 10 seconds. We recommend 30 seconds.

Step 7

Router(config-if)# ppp chap hostname hostname

(Optional) Identifies the hostname sent in the Challenge Handshake Authentication Protocol (CHAP) challenge.

hostname is the name of the bundle group. This name uniquely identifies the bundle.

Note If you configure this command on the bundle and its member links, specify the same identifier for both the bundle and the member links.

Step 8

Router(config-if)# ppp multilink group group-number

Associates the link with the MLP bundle you specify.

group-number is a nonzero number that identifies the multilink group. Valid values are from 1 to 9999.

The group-number must be identical to the specified multilink-bundle-number of the MLP bundle to which you want to add this link.

Adding an ATM Member Link to an MLP Bundle

You can configure up to 10 member links per MLP bundle for Multi-VC MLP over ATM. However, you can configure only one member link per MLP bundle for Single-VC MLP over ATM.

To add ATM member links to an MLP bundle, enter the following commands beginning in global configuration mode:

Command

Purpose

Step 1

Router(config)# interface atm slot/module/port

Configures or modifies the ATM interface you specify and enters interface configuration mode.

Step 2

Router(config-if)# hold-queue length {in | out}

Limits the size of the IP output queue on an interface.

length is a number that specifies the maximum number of packets in the queue. Valid values are from 0 to 4096. We recommend 4096 packets for all line cards. By default, the input queue is 75 packets and the output queue is 40 packets.

in specifies the input queue.

out specifies the output queue.

Note We recommend that you configure this command on all physical interfaces, except when using the ATM OC-12 line card.

aal5mux ppp specifies the AAL and encapsulation type for multiplex (MUX)-type VCs. The ppp keyword is Internet Engineering Task Force (IETF)-compliant PPP over ATM; it specifies the protocol type being used by the MUX-encapsulated VC. Use this protocol type for Multi-VC MLP over ATM to identify the virtual template. This protocol is supported on ATM PVCs only.

virtual-template number is the number used to identify the virtual template.

Step 9

Router(config-if-atm-vc)# protocol ppp virtual-template number

Enables PPP sessions to be established over the ATM PVC using the configuration from the virtual template you specify. Use this command only if you specified aal5snap as the encapsulation type and you are configuring MLP on multiple VCs.

number is a nonzero number that identifies the virtual template that you want to apply to this ATM PVC.

Step 10

Router(config-if-atm-vc)# ppp multilink group group-number

Associates the PVC with an MLP bundle.

group-number is a nonzero number that identifies the multilink group. Valid values are:

•Single-VC MLP over ATM—10,000 and higher.

•Multi-VC MLP over ATM—1 to 9999 (Release 12.2(28)SB and later) or from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later).

The group-number must be identical to the specified multilink-bundle-number of the MLP bundle to which you want to add this link.

Configuration Example for Adding ATM Links to an MLP Bundle

Example 22-3 shows how to add ATM links to an MLP bundle. In the example, the virtual template named Virtual-Template 1 is applied to PVCs 0/34, 0/35, and 0/36. Each of these PVCs is assigned to MLP bundle group 1. Notice that all of the member links have the same encapsulation type. The router does not support member links with different encapsulation types.

Example 22-3 Adding ATM Links to an MLP Bundle

Router(config)# interface Multilink 1

Router(config-if)# ip address 10.6.6.1 255.255.255.0

Router(config-if)# ppp multilink

Router(config-if)# ppp multilink group 1

!

Router(config)# interface virtual-template1

Router(config-if)# ppp max-configure 110

Router(config-if)# ppp max-failure 100

Router(config-if)# ppp timeout retry 5

Router(config-if)# keepalive 30

Router(config-if)# no ip address

Router(config-if)# ppp multilink

!

Router(config)# interface atm 6/0/0

Router(config-if)# no ip address

Router(config-if)# hold-queue 4096 in

!

Router(config)# interface atm 6/0/0.1 point-to-point

Router(config-if)# no ip address

Router(config-if)# pvc 0/34

Router(config-if-atm-vc)# vbr-nrt 512 256 20

Router(config-if-atm-vc)# encapsulation aal5snap

Router(config-if-atm-vc)# protocol ppp Virtual-Template 1

Router(config-if-atm-vc)# ppp multilink group 1

!

Router(config)# interface atm 6/0/0.2 point-to-point

Router(config-if)# no ip address

Router(config-if)# pvc 0/35

Router(config-if-atm-vc)# vbr-nrt 512 256 20

Router(config-if-atm-vc)# encapsulation aal5snap

Router(config-if-atm-vc)# protocol ppp Virtual-Template 1

Router(config-if-atm-vc)# ppp multilink group 1

!

Router(config)# interface ATM 6/0/0.3 point-to-point

Router(config-if)# no ip address

Router(config-if)# pvc 0/36

Router(config-if-atm-vc)# vbr-nrt 512 256 20

Router(config-if-atm-vc)# encapsulation aal5snap

Router(config-if-atm-vc)# protocol ppp Virtual-Template 1

Router(config-if-atm-vc)# ppp multilink group 1

Moving a Member Link to a Different MLP Bundle

To move a member link to a different MLP bundle, enter the following commands beginning in global configuration mode:

Command

Purpose

Step 1

Router(config)# interface type number

Specifies the interface that you want to move to a different MLP bundle. Enters interface or subinterface configuration mode.

type specifies the type of interface (for example, ATM).

number specifies the interface number and is the slot/module/port.subinterface number or the slot/module/port.channel:controller-number of the interface (for example, ATM 1/0/0.1).

Step 2

Router(config-if)# ppp chap hostname hostname

(Optional) Identifies the hostname sent in the Challenge Handshake Authentication Protocol (CHAP) challenge.

hostname is the name of the bundle group. This name uniquely identifies the bundle.

Note If you configure this command on the bundle and its member links, specify the same identifier for both the bundle and the member links.

Step 3

Router(config-if)# ppp multilink group group-number

Moves this interface to the MLP bundle you specify.

group-number identifies the multilink group. Change this group-number to the new MLP group group-number. Valid values are:

•MLP over Serial—1 to 9999 (Release 12.2(28)SB and later) or from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later).

•Single-VC MLP over ATM—10,000 and higher.

•Multi-VC MLP over ATM—1 to 9999 (Release 12.2(28)SB and later) or from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later).
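For example, the following sketch moves a serial member link from bundle 1 (as configured in Example 22-5) to bundle 2; the hostname m2 and group number 2 are illustrative:

Router(config)# interface serial 1/0/0/2:0
Router(config-if)# ppp chap hostname m2
Router(config-if)# ppp multilink group 2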

Removing a Member Link from an MLP Bundle

To remove a member link from an MLP bundle, enter the following commands beginning in global configuration mode:

Command

Purpose

Step 1

Router(config)# interface type number

Specifies the member link that you want to remove from the MLP bundle. Enters interface configuration mode.

type specifies the type of interface (for example, ATM).

number specifies the interface number and is the slot/module/port.subinterface number or the slot/module/port.channel:controller-number of the interface (for example, ATM 1/0/0.1).

Step 2

Router(config-if)# no ppp multilink group group-number

Removes the member link from the MLP group.

group-number is the number of the MLP group from which you want to remove the member link.

Step 3

Router(config-if)# no ppp multilink

Disables multilink for the link.

Step 4

Router(config-if)# no ppp chap hostname

Removes PPP authentication.
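For example, the following sketch removes the second serial member link of Example 22-5 from bundle 1:

Router(config)# interface serial 1/0/0/2:0
Router(config-if)# no ppp multilink group 1
Router(config-if)# no ppp multilink
Router(config-if)# no ppp chap hostname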

Changing the Default Endpoint Discriminator

When the local system negotiates the use of MLP with the peer system, the default endpoint discriminator value provided is the username used for authentication. The ppp chap hostname or ppp pap sent-username command configures the username for the interface; otherwise, the username defaults to the globally configured hostname.

To change the default endpoint discriminator, enter the following command in interface configuration mode:

Router(config-if)# ppp multilink endpoint {hostname | ip ip-address | mac lan-interface | none | phone telephone-number | string char-string}

Overrides or changes the default endpoint discriminator the system uses when negotiating the use of MLP with the peer system.

hostname indicates to use the hostname configured for the router. This is useful when multiple routers are using the same username to authenticate, but have different hostnames.

ip ip-address indicates to use the supplied IP address.

mac lan-interface indicates to use the MAC address of the specified LAN interface.

none causes negotiation of the Link Control Protocol (LCP) without requesting the endpoint discriminator option, which is useful when the router connects to a malfunctioning peer system that does not handle the endpoint discriminator option properly.

phone telephone-number indicates to use the specified telephone number. Accepts E.164-compliant, full international telephone numbers.

string char-string indicates to use the supplied character string.

Configuration Example for Changing the Endpoint Discriminator

Example 22-4 shows how to change the MLP endpoint discriminator from the default CHAP hostname C-host1 to the hostname cambridge.
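A plausible sketch of such a configuration (the interface and exact command sequence are assumptions); with ppp multilink endpoint hostname, the endpoint discriminator follows the router hostname instead of the CHAP hostname:

Router(config)# hostname cambridge
cambridge(config)# interface serial 1/0/0/1:0
cambridge(config-if)# ppp chap hostname C-host1
cambridge(config-if)# ppp multilink endpoint hostname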

Configuration Example for Configuring MLP over Serial Interfaces

Example 22-5 shows a sample configuration for configuring MLP over serial interfaces. In the example, serial interfaces 1/0/0/1:0 and 1/0/0/2:0 are added to the Multilink1 bundle.

Example 22-5 Configuring MLP on Serial Interfaces

interface Multilink1

ip address 100.1.1.1 255.255.255.0

no keepalive

ppp multilink

ppp multilink group 1

!

interface serial 1/0/0/1:0

no ip address

encapsulation ppp

ppp chap hostname m1

ppp multilink

ppp multilink group 1

!

interface serial 1/0/0/2:0

no ip address

encapsulation ppp

ppp chap hostname m1

ppp multilink

ppp multilink group 1

Configuration Example for Configuring Single-VC MLP over ATM

Example 22-6 shows a sample configuration for configuring Single-VC MLP over ATM. In the example, PVC 0/36 on ATM subinterface 5/0/0.3 is added to a single link MLP bundle. Single-VC MLP over ATM is configured on remote sites to deploy LFI for protecting interactive traffic on low speed ATM VCs.

Example 22-6 Configuring Single-VC MLP over ATM VCs

interface ATM5/0/0

no ip address

no atm ilmi-keepalive

interface ATM5/0/0.3 point-to-point

pvc 0/36

vbr-nrt 512 612

encapsulation aal5mux ppp Virtual-Template1

ppp multilink group 10001

interface Virtual-Template1

bandwidth 512

no ip address

ppp multilink

interface Multilink 10001

ip address <ip address>

ppp multilink

ppp multilink group 10001

Configuration Example for Configuring Multi-VC MLP over ATM

Example 22-7 shows a sample configuration for configuring Multi-VC MLP over ATM. In the example, PVC 0/36 on ATM subinterface 5/0/0.3 and PVC 0/37 on ATM subinterface 5/0/0.4 are added to the Multilink2 bundle. The virtual template named Virtual-Template1 is applied to PVC 0/36 and PVC 0/37.

Example 22-7 Configuring Multi-VC MLP over ATM VCs

interface Multilink2

ip address 100.1.2.1 255.255.255.0

ppp multilink

ppp multilink group 2

!

interface ATM5/0/0

no ip address

no atm ilmi-keepalive

!

interface ATM5/0/0.3 point-to-point

pvc 0/36

ppp chap hostname m2

ppp multilink group 2

vbr-nrt 128 64 20

encapsulation aal5mux ppp Virtual-Template1

!

interface ATM5/0/0.4 point-to-point

pvc 0/37

ppp chap hostname m2

ppp multilink group 2

vbr-nrt 128 64 20

encapsulation aal5mux ppp Virtual-Template1

!

interface Virtual-Template1

no ip address

no keepalive

ppp max-configure 110

ppp max-failure 100

ppp multilink

ppp timeout retry 5

!

Configuration Example for MLP on LNS

Example 22-8 shows how to set up a tunnel on the GigabitEthernet interface on which the VPDN member links are negotiated and added to the MLP bundle cloned from virtual template 500.

Example 22-8 MLP on LNS

aaa new-model

!

!

aaa authentication ppp default local

aaa authentication ppp TESTME group radius

aaa authorization network default local

aaa authorization network TESTME group radius

!

aaa session-id common

buffers small perm 15000

buffers mid perm 12000

buffers big perm 8000

!

vpdn enable

!

vpdn-group LNS_1

accept-dialin

protocol l2tp

virtual-template 500

terminate-from hostname LAC1-1

local name LNS1-1

lcp renegotiation always

l2tp tunnel receive-window 100

l2tp tunnel password 0 cisco

l2tp tunnel timeout no-session 30

l2tp tunnel retransmit retries 7

l2tp tunnel retransmit timeout min 2

l2tp tunnel retransmit timeout max 8

!

!

interface GigabitEthernet2/0/0

ip address 210.1.1.3 255.255.255.0

negotiation auto

hold-queue 4096 in

!

!

interface Virtual-Template500

ip unnumbered Loopback1

peer default ip address pool pool-1

ppp mtu adaptive

ppp timeout authentication 100

ppp max-configure 110

ppp max-failure 100

ppp timeout retry 5

keepalive 30

ppp authentication pap TESTME

ppp authorization TESTME

ppp multilink

!

ip local pool pool-1 1.1.1.1 1.1.1.100

radius-server host 15.1.0.100 auth-port 1645 acct-port 1646 key cisco

radius-server retransmit 0

Configuration Example for MLPoE LAC Switching

Example 22-9 shows how to configure the LAC for switching an MLPoE connection to the LNS, while also forwarding the DSL tags.

Example 22-9 MLPoE LAC Switching

aaa new-model

!

multilink bundle-name authenticated

vpdn enable

!

vpdn-group LACoe_LFI

request-dialin

protocol l2tp

domain hello_oe

dsl-line-info-forwarding

initiate-to ip 192.168.125.54

local name LACoe_LFI

l2tp tunnel password 0 lab

!

username LNSoe_LFI nopassword

!

bba-group pppoe global

virtual-template 800

vendor-tag dsl-sync-rate service

!

interface GigabitEthernet4/0/0

no ip address

negotiation auto

!

interface GigabitEthernet4/0/0.1

encapsulation dot1Q 800

pppoe enable group global

!

interface GigabitEthernet4/1/0

ip address 192.168.125.53 255.255.255.0

negotiation auto

!

interface Virtual-Template800

no peer default ip address

keepalive 30

ppp authentication pap

ppp multilink

ppp multilink links maximum 1

!

Configuration Examples of MLPoE at PTA

This section provides configuration examples of the MLPoE at PTA feature:

Note This command currently displays statistics for system traffic only. Statistics for bundle traffic do not display. For information about bundle traffic, see the show interfaces or show ppp multilink command.

If you specify the bundle-interface argument, the command displays information for only that specific bundle.

Router# show running-config

Displays information about the current router configuration, including information about each interface configuration.

Bundle Counters and Link Counters

When you enter the show interface command on an MLP bundle interface and on all of its member link interfaces, you might expect the counters on the bundle to be equal to the sum of the counters for all of the link interfaces. However, this is not the case.

The statistics for the various interfaces reflect the data that actually goes through those interfaces. The data that goes through the bundle is different from the data going through the links. All of the traffic at the bundle level does eventually pass through the link level, but it is not in the same format. In addition, links also carry traffic that is private to that link, such as link-level keepalives.

The following list describes some of the reasons link-level and bundle-level counts might be different (ignoring the link-private traffic):

•Multilink fragmentation might be occurring. A single packet at the bundle level becomes multiple packets at the link level.

•Frames at the bundle level include only bundle-level encapsulation, which consists of a 2-byte PPP header (or 1-byte header under some circumstances).

•Frames at the link level include link-level encapsulation bytes, which include all forms of media-specific encapsulation and framing. This information includes headers and trailers for High-Level Data Link Control (HDLC) and PPP over ATM. The link-level encapsulation bytes also include multilink subheaders (for example, sequence numbers), if they are used.

Note Multilink subheaders are not part of the bundle-level packet encapsulation. They are part of the encapsulation added to fragments before the fragments are placed on the link; they are not added to the network-level datagrams (for example, IP packets) before the datagrams are sent to the fragmentation engine.

Because of the factors listed above, the counts on the links can be greater than the counts on the bundle. The link level has a great deal of overhead that is not visible at the bundle level.
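The arithmetic behind this difference can be sketched as follows. This is an illustrative Python sketch only, not router code: the fragment size and per-frame overhead values are hypothetical, and real link-level framing varies by media type.

```python
import math

def link_level_estimate(bundle_packet_bytes, frag_size, per_frame_overhead):
    """Estimate link-level frame and byte counts for a list of bundle-level
    packet payload sizes, assuming every packet is fragmented into chunks of
    at most frag_size bytes of MLP payload, and each resulting frame adds
    per_frame_overhead bytes of multilink subheader plus media framing
    (both values are hypothetical, for illustration)."""
    frames = 0
    link_bytes = 0
    for size in bundle_packet_bytes:
        n = max(1, math.ceil(size / frag_size))  # one bundle packet -> n link frames
        frames += n
        link_bytes += size + n * per_frame_overhead  # overhead repeats per fragment
    return frames, link_bytes

# One 1500-byte bundle-level packet with a 512-byte fragment size becomes
# three link-level frames, each carrying encapsulation bytes that are
# invisible in the bundle interface counters.
print(link_level_estimate([1500], frag_size=512, per_frame_overhead=8))
```

Because the per-frame overhead is multiplied by the fragment count, both the packet and byte counters on the member links exceed the counters on the bundle.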

Verification Example for the show interfaces multilink Command

Example 22-12 shows sample output for the show interfaces multilink command. In the example, configuration information and packet statistics display for MLP bundle 8.

Example 22-12 Sample Output for the show interfaces multilink Command

Router# show interfaces multilink 8

Multilink8 is up, line protocol is up

Hardware is multilink group interface

Internet address is 10.1.1.1/24

MTU 1500 bytes, BW 15360 Kbit, DLY 100000 usec, rely 255/255, load 1/255

Encapsulation PPP, crc 16, loopback not set

Keepalive not set

DTR is pulsed for 2 seconds on reset

LCP Open, multilink Open

Open:IPCP

Last input 15:24:43, output never, output hang never

Last clearing of "show interface" counters 15:27:59

Queueing strategy:fifo

Output queue 0/40, 0 drops; input queue 0/75, 0 drops

5 minute input rate 0 bits/sec, 0 packets/sec

5 minute output rate 0 bits/sec, 0 packets/sec

36 packets input, 665 bytes, 0 no buffer

Received 0 broadcasts, 0 runts, 0 giants, 0 throttles

0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort

31 packets output, 774 bytes, 0 underruns

0 output errors, 0 collisions, 0 interface resets

0 output buffer failures, 0 output buffers swapped out

0 carrier transitions

Verification Example for the show ppp multilink Command

Example 22-13 shows sample output from the show ppp multilink command. In the example, information about the MLP over ATM bundle (Multilink3) displays first. Information about the member links then displays, including the number of active and inactive member links. Class fields are omitted from the output; everything is implicitly in receive class 0 and transmit class 0.

The following list describes the bundle-level fields and lines in the show ppp multilink command output:

•Bundle name is name—The bundle identifier for the bundle.

•Bundle up for time—The elapsed time since the bundle first came up.

•load n/255—The traffic load on the bundle as multilink computes loads for bandwidth-on-demand purposes. This load might count all traffic, or just inbound or outbound traffic, depending on the configuration.

•Receive buffer limit n bytes—The maximum amount of fragment data that multilink can buffer in its fragment reassembly engine for each receive class. This amount is derived from the configured slippage constraints.

•Frag timeout n ms—The maximum amount of time that multilink waits for an expected fragment before declaring it lost. This limit applies only when fragment loss cannot be detected by other, faster means such as sequence number-based detection.

•Member links:—The number of active and inactive links currently in the bundle, followed by the desired minimum and maximum number of links. The actual number might be outside the range.
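The load n/255 notation in the list above expresses utilization as a fraction of 255. The following Python sketch shows how such a value maps to a measured rate and configured bandwidth; it mirrors the reported form only, not the exact smoothing the router applies.

```python
def load_fraction(rate_bps, bandwidth_kbit):
    """Map a measured traffic rate to the n in a load of n/255: scale the
    rate against the configured bandwidth (in kbit) and round into the
    0..255 range. Illustrative only; IOS applies its own averaging."""
    utilization = min(rate_bps / (bandwidth_kbit * 1000), 1.0)
    return max(1, round(utilization * 255))  # an idle interface still shows 1/255

# A 3 Mbit/s flow on the 15360-kbit bundle from Example 22-12 is about 50/255.
print(f"{load_fraction(3_000_000, 15360)}/255")
```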

After all of the bundle parameters display, information about each individual link in the bundle displays. Extra link-level parameters might be shown after each link in certain circumstances. The following list describes the individual link parameters:

•Weight—The weight is used for load balancing. Data is distributed between the member links in proportion to their weight. The weight is proportional to multilink's notion of the effective bandwidth of a link. Therefore, multilink effectively distributes data to the links in proportion to their bandwidth.

The effective bandwidth of a link is the configured bandwidth value, except on asynchronous lines where multilink uses a value that is 0.8 times the configured bandwidth setting. This exception occurs because, on an asynchronous line, at best only 8/10 of the raw bandwidth is available for transmitting real data and the remainder is consumed in framing overhead.

Previously, the weight also controlled the size of the fragments generated for that link. However, Cisco IOS software now computes a separate fragment size value.

•Frag size—The size of the largest fragment that can be generated for that link. It is the size of the MLP payload carried by a fragment and does not include MLP headers or link-level framing.

•Unsequenced—The serial link is unsequenced and packets can arrive in a different order than the peer transmitted them. To compensate for this, multilink relaxes its lost fragment detection mechanisms.

•Receive only (or receive only pending)—The link is in idle mode or is about to be put in idle mode. Processing of arriving data on the link continues normally, but data is not transmitted on the link. The remote system is expected to not send data on the link.
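The weight-based distribution described above can be sketched in a few lines of Python. This is a simplified model under stated assumptions: real multilink balances traffic packet by packet, and the link values here (two serial links and one async line) are hypothetical.

```python
def effective_bandwidth(configured_kbit, is_async=False):
    """Effective bandwidth used as a multilink weight: the configured value,
    except asynchronous lines count only 0.8 times the configured setting,
    because roughly 8/10 of the raw bandwidth carries real data (per the
    text above)."""
    return configured_kbit * 0.8 if is_async else configured_kbit

def distribute(total_bytes, links):
    """Split a byte count across member links in proportion to their
    weights, the way multilink load balancing distributes data.
    links is a list of (configured_kbit, is_async) pairs."""
    weights = [effective_bandwidth(bw, a) for bw, a in links]
    total_w = sum(weights)
    return [round(total_bytes * w / total_w) for w in weights]

# Two 1536-kbit serial links and one 115.2-kbit async line: the async
# link's weight is 115.2 * 0.8 = 92.16, so it carries a small share.
print(distribute(10000, [(1536, False), (1536, False), (115.2, True)]))
```

The two equal-bandwidth serial links receive equal shares, while the async line's share is reduced both by its lower bandwidth and by the 0.8 framing-overhead factor.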

Verification Example for the show interfaces multilink stat Command

Example 22-14 shows sample output for the show interfaces multilink stat command. In the example, the numbers of input and output packets display for each of the specified switching paths.

Related Documentation

This section provides hyperlinks to additional Cisco documentation for the features discussed in this chapter. To display the documentation, click the document title or a section of the document highlighted in blue. When appropriate, paths to applicable sections are listed below the documentation title.