Frame Relay is an industry-standard, switched data link layer protocol
that handles multiple virtual circuits using High-Level Data Link Control
(HDLC) encapsulation between connected devices. In many cases, Frame Relay is
more efficient than X.25, the protocol for which it is generally considered a
replacement. The following figure illustrates a Frame Relay frame (ANSI
T1.618).

Note that, in the figure above, Q.922 addresses, as presently defined, are
two octets and contain a 10-bit data-link connection identifier (DLCI). In some
networks Q.922 addresses may optionally be increased to three or four octets.

The "flag" fields delimit the beginning and end of the frame. Following
the leading "flag" field are two bytes of address information. Ten bits of
these two bytes make up the actual circuit ID (called the DLCI, for data-link
connection identifier).

The 10-bit DLCI value is the heart of the Frame Relay header. It
identifies the logical connection that is multiplexed into the physical
channel. In the basic (that is, not extended by the Local Management Interface
[LMI]) mode of addressing, DLCIs have local significance; that is, the end
devices at two different ends of a connection may use a different DLCI to refer
to that same connection.
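In the default two-octet format, the DLCI occupies the six high-order bits of the first address octet and the four high-order bits of the second octet; the remaining bits carry the C/R, FECN, BECN, DE, and EA indications. As a sketch of the bit arithmetic (where O1 and O2 are the two address octets):

```latex
\mathrm{DLCI} = 16 \left\lfloor \frac{O_1}{4} \right\rfloor + \left\lfloor \frac{O_2}{16} \right\rfloor
```

For example, address octets 0xF4 and 0x41 (with C/R, FECN, BECN, and DE all zero) carry DLCI 16 x 61 + 4 = 980.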

This document is not restricted to specific software and hardware
versions.

The information presented in this document was created from devices in
a specific lab environment. All of the devices used in this document started
with a cleared (default) configuration. If you are working in a live network,
ensure that you understand the potential impact of any command before using
it.

Frame Relay was originally conceived as a protocol for use over ISDN
interfaces. Initial proposals to this effect were submitted to the
International Telecommunication Union Telecommunication Standardization Sector
(ITU-T) (formerly the Consultative Committee for International Telegraph and
Telephone [CCITT]) in 1984. Work on Frame Relay was also undertaken in the
ANSI-accredited T1S1 standards committee in the United States.

In 1990, Cisco Systems, StrataCom, Northern Telecom, and Digital
Equipment Corporation formed a consortium to focus Frame Relay technology
development and accelerate the introduction of interoperable Frame Relay
products. They developed a specification conforming to the basic Frame Relay
protocol being discussed in T1S1 and ITU-T, but extended it with features that
provide additional capabilities for complex internetworking environments. These
Frame Relay extensions are referred to collectively as the LMI. This is the
"cisco" LMI in the router as opposed to the "ansi" or "q933a" LMI.

Frame Relay provides a packet-switching data communications capability
that is used across the interface between user devices (such as routers,
bridges, host machines) and network equipment (such as switching nodes). User
devices are often referred to as data terminal equipment (DTE), while network
equipment that interfaces to DTE is often referred to as data
circuit-terminating equipment (DCE). The network providing the Frame Relay
interface can be either a carrier-provided public network or a network of
privately owned equipment serving a single enterprise.

Frame Relay differs significantly from X.25 in its functionality and
format. In particular, Frame Relay is a more streamlined protocol, facilitating
higher performance and greater efficiency.

As an interface between user and network equipment, Frame Relay
provides a means for statistically multiplexing many logical data conversations
(referred to as virtual circuits) over a single physical transmission link.
This contrasts with systems that use only time-division-multiplexing (TDM)
techniques for supporting multiple data streams. Frame Relay's statistical
multiplexing provides more flexible and efficient use of available bandwidth.
It can be used without TDM techniques or on top of channels provided by TDM
systems.

Another important characteristic of Frame Relay is that it exploits the
recent advances in wide-area network (WAN) transmission technology. Earlier WAN
protocols, such as X.25, were developed when analog transmission systems and
copper media were predominant. These links are much less reliable than the
fiber media/digital transmission links available today. Over these more
reliable links, link-layer protocols can forego time-consuming error correction
algorithms, leaving these to be performed at higher protocol layers. Greater
performance and efficiency is therefore possible without sacrificing data
integrity. Frame Relay is designed with this approach in mind. It includes a
cyclic redundancy check (CRC) algorithm for detecting corrupted bits (so the
data can be discarded), but it does not include any protocol mechanisms for
correcting bad data (for example, by retransmitting it at this level of
protocol).

Another difference between Frame Relay and X.25 is the absence of
explicit, per-virtual-circuit flow control in Frame Relay. Now that many
upper-layer protocols are effectively executing their own flow control
algorithms, the need for this functionality at the link layer has diminished.
Frame Relay, therefore, does not include explicit flow control procedures that
duplicate those in higher layers. Instead, very simple congestion notification
mechanisms are provided to allow a network to inform a user device that the
network resources are close to a congested state. This notification can alert
higher-layer protocols that flow control may be needed.

Once you have reliable connections to the local Frame Relay switch at
both ends of the permanent virtual circuit (PVC), then it is time to start
planning the Frame Relay configuration. In this first example, the Local
Management Interface (LMI) type defaults to "cisco" LMI on Spicey. An interface
is a "multipoint" interface by default, so frame-relay
inverse-arp is on (on point-to-point interfaces, there is no Inverse ARP).
IP split horizon checking is disabled by default for Frame Relay encapsulation,
so routing updates come in and out the same interface. The routers learn the
data-link connection identifiers (DLCIs) they need to use from the Frame Relay
switch via LMI updates. The routers then Inverse ARP for the remote IP address
and create a mapping of local DLCIs and their associated remote IP addresses.
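Under these defaults, the hub needs little more than the Frame Relay encapsulation. The following sketch is illustrative only; the interface number and IP addressing are hypothetical:

```
interface Serial0
 ip address 10.1.1.1 255.255.255.0
 encapsulation frame-relay
! The LMI type defaults to "cisco", the main interface is multipoint,
! and frame-relay inverse-arp is on, so no map statements are needed.
```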

In this example, the router learns which data-link connection
identifiers (DLCIs) it uses from the Frame Relay switch and assigns them to the
main interface. Then the router will Inverse ARP for the remote IP address.

Note: You will not be able to ping Prasit's serial IP address from Aton
unless you explicitly add in Frame Relay maps on each end. If routing is
configured correctly, traffic originating on the LANs should not have a
problem. You will be able to ping if you use the Ethernet IP address as the
source address in an extended ping.

When frame-relay inverse-arp is enabled,
broadcast IP traffic will go out over the connection by
default.

You cannot ping from one spoke to another spoke in a hub and spoke
configuration using multipoint interfaces because there is no mapping for the
other spokes' IP addresses. Only the hub's address is learned via the Inverse
Address Resolution Protocol (IARP). If you configure a static map using the
frame-relay map command for the IP address of a remote spoke to use the local
data link connection identifier (DLCI), you can ping the addresses of other
spokes.
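Such a static mapping can be sketched as follows; the addresses and the DLCI toward the hub are hypothetical:

```
! On one spoke: map the other spoke's IP address to the local DLCI
! that reaches the hub (hypothetical values shown).
interface Serial0
 encapsulation frame-relay
 frame-relay map ip 10.1.1.3 102 broadcast
```

Traffic to the other spoke then transits the hub, which switches it between the two PVCs.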

Frame Relay subinterfaces provide a mechanism for supporting partially
meshed Frame Relay networks. Most protocols assume transitivity on a logical
network; that is, if station A can talk to station B, and station B can talk to
station C, then station A should be able to talk to station C directly.
Transitivity is true on LANs, but not on Frame Relay networks unless A is
directly connected to C.

Additionally, certain protocols, such as AppleTalk and transparent
bridging, cannot be supported on partially meshed networks because they require
"split horizon" in which a packet received on an interface cannot be
transmitted out the same interface even if the packet is received and
transmitted on different virtual circuits.

Configuring Frame Relay subinterfaces ensures that a single physical
interface is treated as multiple virtual interfaces. This capability allows us
to overcome split horizon rules. Packets received on one virtual interface can
now be forwarded out another virtual interface, even if they are configured on
the same physical interface.

Subinterfaces address the limitations of Frame Relay networks by
providing a way to subdivide a partially meshed Frame Relay network into a
number of smaller, fully meshed (or point-to-point) subnetworks. Each
subnetwork is assigned its own network number and appears to the protocols as
if it is reachable through a separate interface. (Note that point-to-point
subinterfaces can be unnumbered for use with IP, reducing the addressing burden
that might otherwise result).

The following hub and spoke sample configuration shows two
point-to-point subinterfaces and uses dynamic address resolution on one remote
site. Each subinterface is provided with an individual protocol address and
subnet mask, and the interface-dlci command
associates the subinterface with a specified data-link connection identifier
(DLCI). Addresses of remote destinations for each point-to-point subinterface
are not resolved since they are point-to-point and traffic must be sent to the
peer at the other end. The remote end (Aton) uses Inverse ARP for its mapping
and the main hub responds accordingly with the IP address of the subinterface.
This occurs because Frame Relay Inverse ARP is on by default for multipoint
interfaces.
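The hub side of such a configuration can be sketched as follows; the subinterface numbers, subnets, and DLCIs are hypothetical:

```
interface Serial0
 no ip address
 encapsulation frame-relay
!
interface Serial0.1 point-to-point
 ip address 10.1.2.1 255.255.255.252
 frame-relay interface-dlci 102
!
interface Serial0.2 point-to-point
 ip address 10.1.3.1 255.255.255.252
 frame-relay interface-dlci 103
```

Each point-to-point subinterface carries exactly one DLCI, so no address resolution is needed on the hub side.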

Dynamic address mapping uses Frame Relay Inverse ARP to request the
next hop protocol address for a specific connection, given a data-link
connection identifier (DLCI). Responses to Inverse ARP requests are entered in
an address-to-DLCI mapping table on the router or access server; the table is
then used to supply the next hop protocol address or the DLCI for outgoing
traffic.

Since the physical interface is now configured as multiple
subinterfaces, you must provide information that distinguishes a subinterface
from the physical interface and associates a specific subinterface with a
specific DLCI.

Inverse ARP is enabled by default for all protocols it supports, but
can be disabled for specific protocol-DLCI pairs. As a result, you can use
dynamic mapping for some protocols and static mapping for other protocols on
the same DLCI. You can explicitly disable Inverse ARP for a protocol-DLCI pair
if you know the protocol is not supported on the other end of the connection.
Because Inverse ARP is enabled by default for all protocols that it supports,
no additional command is required to configure dynamic address mapping on a
subinterface. A static map links a specified next hop protocol address to a
specified DLCI. Static mapping removes the need for Inverse ARP requests; when
you supply a static map, Inverse ARP is automatically disabled for the
specified protocol on the specified DLCI. You must use static mapping if the
router at the other end either does not support Inverse ARP at all or does not
support Inverse ARP for a specific protocol that you want to use over Frame
Relay.

If you do not have the IP address space to use many subinterfaces, you
can use IP unnumbered on each subinterface. If this is the case, you need to
use static routes or dynamic routing so that your traffic is routed as usual,
and you must use point-to-point subinterfaces.
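A minimal sketch of this approach, with hypothetical interface numbers and addresses:

```
interface Serial0.1 point-to-point
 ip unnumbered Ethernet0
 frame-relay interface-dlci 102
!
! Traffic must still be routed explicitly, for example with a static
! route to the remote LAN via the subinterface (hypothetical prefix):
ip route 10.2.2.0 255.255.255.0 Serial0.1
```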

You may want to back up Frame Relay circuits using ISDN. There are
several ways to do this. The first, and probably the best, is to use floating
static routes that route traffic to a Basic Rate Interface (BRI) IP address and
use an appropriate routing metric. You can also use a backup interface on the
main interface or on a per-data-link connection identifier (DLCI) basis. It may
not help much to back up the main interface because you could lose permanent
virtual circuits (PVCs) without the main interface going down. Remember, the
protocol is being exchanged with the local Frame Relay switch, not the remote
router.
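The floating static route approach can be sketched as follows; the prefixes, next hops, and the administrative distance of 200 are illustrative assumptions:

```
! Primary path: points out the Frame Relay subinterface, so the route
! is withdrawn if that subinterface goes down. In practice the primary
! route is often learned dynamically instead.
ip route 10.2.2.0 255.255.255.0 Serial0.1
! Floating static: higher administrative distance (200), installed only
! when the primary route disappears; next hop is the ISDN BRI peer.
ip route 10.2.2.0 255.255.255.0 10.9.9.2 200
```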

Now let's assume that Spicey is the central side and that Prasit is the
side making connections to the central side (Spicey). Take care that you only
add the backup commands to the side that is calling the central side.

Note: Backup load is not supported on subinterfaces. As we do not track
traffic levels on subinterfaces, no load is calculated.

Here is an example of a hub and spoke per DLCI backup configuration.
The spoke routers are calling the hub router. As you can see, we allow only one
B channel per side by using the max-link option on the dialer pool on the hub
side.
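On the calling (spoke) side, per-DLCI backup can be sketched by protecting the point-to-point subinterface that carries the DLCI; the interface numbers, address, DLCI, and backup timers below are hypothetical:

```
interface Serial0.1 point-to-point
 ip address 10.1.2.2 255.255.255.252
 frame-relay interface-dlci 201
 ! Bring up BRI0 5 seconds after the PVC fails; tear it down
 ! 10 seconds after the PVC recovers.
 backup interface BRI0
 backup delay 5 10
```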

Frame Relay switching is a means of switching packets based on the
data-link connection identifier (DLCI). You can think of the DLCI as the Frame
Relay equivalent of a Media Access Control (MAC) address. You perform switching
by configuring your Cisco router or access server as a switch in a Frame Relay
network.
There are two parts to a Frame Relay network:

Note: In Cisco IOS Software Release 12.1(2)T and later, the
frame-relay route command has been replaced by the
connect command.

Let's look at a sample configuration. In the configuration below, we
are using the router America as a Frame Relay switch. We are using Spicey as a
hub router and Prasit and Aton as spoke routers. We have connected them as
follows:
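The switch side of this setup can be sketched along the following lines; the interface numbers, DLCIs, and clock rates are hypothetical, and the connect syntax applies to 12.1(2)T and later:

```
frame-relay switching
!
interface Serial0
 encapsulation frame-relay
 clock rate 64000            ! assuming America supplies clocking (DCE cable)
 frame-relay intf-type dce
!
interface Serial1
 encapsulation frame-relay
 clock rate 64000
 frame-relay intf-type dce
!
! Switch DLCI 102 arriving on Serial0 to DLCI 201 on Serial1.
connect SPICEY-PRASIT Serial0 102 Serial1 201
```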

Data-link connection identifier (DLCI) prioritization is the process
whereby different traffic types are placed upon separate DLCIs so that a Frame
Relay network can provide a different committed information rate for each
traffic type. It can be used in conjunction with either custom queuing or
priority queuing to provide bandwidth management control over the access link
to the Frame Relay network. In addition, some Frame Relay service providers and
Frame Relay switches (such as the StrataCom Internetwork Packet Exchange [IPX],
IGX and BPX or AXIS switches) actually provide prioritization within the Frame
Relay cloud based on this priority setting.

Broadcast queue is a major feature that is used in medium to large IP
or IPX networks where routing and service access point (SAP) broadcasts must
flow across the Frame Relay network. The broadcast queue is managed
independently of the normal interface queue, has its own buffers, and has a
configurable size and service rate. This broadcast queue is not used for
bridging spanning-tree updates (BPDUs) because of timing sensitivities. These
packets will flow through the normal queues. The interface command to enable
broadcast queue follows:

frame-relay broadcast-queue size byte-rate packet-rate

A broadcast queue is given a maximum transmission rate (throughput)
limit measured in bytes per second and packets per second. The queue is
serviced to ensure that only this maximum is provided. The broadcast queue has
priority when transmitting at a rate below the configured maximum, and hence
has a guaranteed minimum bandwidth allocation. The two transmission rate limits
are intended to avoid flooding the interface with broadcasts. The actual limit
in any second is the first rate limit that is reached. Given the transmission
rate restriction, additional buffering is required to store broadcast packets.
The broadcast queue is configurable to store large numbers of broadcast
packets. The queue size should be set to avoid loss of broadcast routing update
packets. The exact size depends on the protocol being used and the number of
packets required for each update. To be safe, the queue size should be set so
that one complete routing update from each protocol and for each data-link
connection identifier (DLCI) can be stored. As a general rule, start with 20
packets per DLCI. The byte rate should be less than both of the following:

N/4 times the minimum remote access rate (measured in bytes per
second), where N is the number of DLCIs to which the broadcast must be
replicated

1/4 the local access rate (measured in bytes per second)

The packet rate is not critical if the byte rate is set conservatively.
In general, the packet rate should be set assuming 250-byte packets. The
defaults for the serial interfaces are 64 queue size, 256,000 bytes per second
(2,048,000 bps), and 36 pps. The defaults for the High Speed Serial Interfaces
(HSSI) are 256 queue size, 1,024,000 bytes per second (8,192,000 bps), and 144
pps.
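Applying these guidelines, a hypothetical interface with 10 DLCIs and a 512 kbps (64,000 bytes per second) local access rate might be configured as follows; all values are illustrative only:

```
! Hypothetical sizing: 10 DLCIs x 20 packets = 200-packet queue.
! The 12,000-byte/s rate stays under 1/4 of the 64,000-byte/s local
! access rate; the packet rate assumes roughly 250-byte packets.
interface Serial0
 frame-relay broadcast-queue 200 12000 48
```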

Traffic above the maximum speed is buffered in a traffic shaping queue
whose size equals that of the weighted fair queue (WFQ). The Token Bucket
filter does not filter traffic, but controls the rate at which traffic is sent
on the outbound interface. For more information on token bucket filters, please
see the Policing and
Shaping Overview.

The maximum number of bits per second that an end station can transmit
into the network is bounded by the access rate of the user-network interface.
The line speed of the user-network connection limits the access rate, which
you establish in your subscription with the service provider.

The maximum committed amount of data you can offer to the network is
defined as Bc. Bc is a measure for the volume of data for which the network
guarantees message delivery under normal conditions. It is measured over the
committed time interval Tc.

The excess burst size (Be) is the number of non-committed bits (outside of the
CIR) that are still accepted by the Frame Relay switch but are marked as
eligible to be discarded (DE).

The token bucket is a 'virtual' buffer. It contains a number of tokens,
enabling you to send a limited amount of data per time interval. The token
bucket is filled with Bc bits per Tc. The maximum size of the bucket is Bc +
Be. If Be is very large and, at time T0, the bucket is filled with Bc + Be
tokens, you can send Bc + Be bits at the access rate. This is not limited by Tc,
but by the time it takes to send the Be bits, which is a function of the access
rate.

The CIR is the allowed amount of data which the network is committed to
transfer under normal conditions. The rate is averaged over an increment of time
Tc. The CIR is also referred to as the minimum acceptable throughput. Bc and Be
are expressed in bits, Tc in seconds, and the access rate and CIR in bits per
second.

Bc, Be, Tc and CIR are defined per data-link connection identifier
(DLCI). Due to this, the token bucket filter controls the rate per DLCI. The
access rate is valid per user-network interface. For Bc, Be and CIR incoming
and outgoing values can be distinguished. If the connection is symmetrical, the
values in both directions are the same. For permanent virtual circuits, we
define incoming and outgoing Bc, Be and CIR at subscription time.

Peak = DLCI's maximum speed. The bandwidth for that particular DLCI.

Tc = Bc / CIR

Peak = CIR + Be/Tc = CIR (1 + Be/Bc)

If the Tc is one second then:

Peak = CIR + Be = Bc + Be

EIR = Be
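As a worked example of these relationships, assume a hypothetical subscription with CIR = 32,000 bps, Bc = 32,000 bits, and Be = 16,000 bits:

```latex
T_c = \frac{B_c}{\mathrm{CIR}} = \frac{32000}{32000} = 1\ \mathrm{s}
\qquad
\mathrm{Peak} = \mathrm{CIR}\left(1 + \frac{B_e}{B_c}\right) = 32000 \times 1.5 = 48\ \mathrm{kbps}
```

With Tc = 1 second, the peak equals Bc + Be = 48,000 bits per second, and the EIR is Be = 16 kbps.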

In the example we are using here, the router sends traffic at between 32
kbps and 48 kbps, depending on congestion in the network. Networks may mark
frames above Bc with DE but have plenty of spare capacity to transport the
frame. The reverse is also possible: they can have limited capacity, yet
discard excessive frames immediately. Networks may mark frames above Bc + Be
with DE, and possibly transport it, or just drop the frames as suggested by the
International Telecommunication Union Telecommunication Standardization Sector
specification ITU-T I.370. Traffic shaping throttles the traffic based on
backward-explicit congestion notification (BECN) tagged packets from the switch
network. If you receive 50 percent BECN, the router decreases the traffic by
one eighth of the current transmitted bandwidth for that particular DLCI.

The transmitted speed is 42 kbps. The router decreases the speed to 42
minus 42 divided by 8 (42 - 42/8), making 36.75 kbps. If congestion persists
after the change, the router reduces the traffic further, again dropping one
eighth of the current transmitted bandwidth. The traffic is reduced until it
reaches the configured CIR value. However, the speed can drop below the CIR
if BECNs are still received. You can specify a bottom limit, such as CIR/2. The
network is no longer congested when all frames received from the network no
longer have a BECN bit for a given time interval. 200 ms is the default value
for this interval.

The generic traffic shaping feature is a media- and
encapsulation-independent traffic shaping tool that helps reduce the flow of
outbound traffic when there is congestion within the cloud, on the link, or at
the receiving endpoint router. We can set it on interfaces or subinterfaces
within a router.

Generic traffic shaping is useful in the following situations:

When you have a network topology that consists of a high-speed (T1
line speed) connection at the central site and low speed (less than 56 kbps)
connections at the branch or telecommuter sites. Because of the speed mismatch,
a bottleneck often exists for traffic on the branch or telecommuter sites when
the central site sends data at a faster rate than the remote sites can receive.
This results in a bottleneck in the last switch before the remote-point router.

If you are a service provider that offers sub-rate services, this
feature enables you to use the router to partition your T1 or T3 links, for
example, into smaller channels. You can configure each subinterface with a
token bucket filter that matches the service ordered by a customer.

On your Frame Relay connection, you may want the router to throttle
traffic instead of sending it into the network. Throttling the traffic would
limit packet loss in the service provider's cloud. The BECN-based throttling
capability provided with this feature allows you to have the router dynamically
throttle traffic based on receiving BECN tagged packets from the network. This
throttling holds packets in the router's buffers to reduce the data flow from
the router into the Frame Relay network. The router throttles traffic on a
subinterface basis, and the rate is also increased when fewer BECN-tagged
packets are received.

To configure a Frame Relay subinterface to estimate the available
bandwidth when it receives BECNs, use the traffic-shape
adaptive command.

Note: You must enable traffic shaping on the interface with the
traffic-shape rate command before you can use the
traffic-shape adaptive command.

The bit rate specified for the traffic-shape
rate command is the upper limit, and the bit rate specified for
the traffic-shape adaptive command is the lower
limit (usually the CIR value) at which traffic is shaped when the interface
receives BECNs. The rate actually used is normally between these two rates. You
should configure the traffic-shape adaptive command
at both ends of the link, as it also configures the device at the flow end to
reflect forward explicit congestion notification (FECN) signals as BECNs. This
enables the router at the high-speed end to detect and adapt to congestion even
when traffic is flowing primarily in one direction.

The following example configures traffic shaping on interface 0.1 with
an upper limit (usually Bc + Be) of 128 kbps and a lower limit of 64 kbps. This
allows the link to run from 64 to 128 kbps, depending on the congestion level.
If the central side has an upper limit of 256 kbps, you should use the lower
upper limit value.
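The configuration described can be sketched as follows; the subinterface number is hypothetical:

```
interface Serial0.1 point-to-point
 ! Shaping must be enabled with traffic-shape rate before
 ! traffic-shape adaptive can be used.
 traffic-shape rate 128000
 traffic-shape adaptive 64000
```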

With generic traffic shaping you can only specify one peak rate (upper
limit) per physical interface and one CIR (lower limit) value per subinterface.
With Frame Relay traffic shaping, you run a token bucket filter per virtual
circuit.

The traffic shaping over Frame Relay feature provides the following
capabilities:

Rate enforcement on a per-VC basis: You can configure a peak rate to
limit outbound traffic to either the CIR or some other defined value such as
the excess information rate (EIR).

Generalized BECN support on a per-VC basis: The router can monitor
BECNs and throttle traffic based on BECN-marked packet feedback from the Frame
Relay network.

Priority queuing (PQ), custom queuing (CQ) or WFQ support at the VC
level. This allows for finer granularity in the prioritization and queuing of
traffic, giving you more control over the traffic flow on an individual VC. The
traffic shaping over Frame Relay feature applies to Frame Relay permanent
virtual circuits (PVCs) and switched virtual circuits (SVCs).

This command shows the status of the permanent virtual circuit (PVC),
packets in and out, dropped packets if there is congestion on the line via
forward explicit congestion notification (FECN) and backward explicit
congestion notification (BECN), and so on. For a detailed description of the
fields used with the show frame-relay pvc command,
refer to the Cisco IOS command reference.

Use the show frame-relay map command to determine
whether frame-relay inverse-arp resolved a remote
IP address to a local DLCI. This command does not apply to point-to-point
subinterfaces; it is useful for multipoint interfaces and subinterfaces only.
Sample output is shown below:

Configuration messages called bridge protocol data units (BPDUs) are
used in the spanning-tree protocols supported in Cisco bridges and routers.
These flow at regular intervals between bridges and constitute a significant
amount of traffic because of their frequent occurrence. There are two types of
spanning-tree protocols in transparent bridging. First introduced by the
Digital Equipment Corporation (DEC), the algorithm was subsequently revised by
the IEEE 802 committee and published in the IEEE 802.1d specification. The DEC
Spanning-Tree Protocol issues BPDUs at one-second intervals, while the IEEE
issues BPDUs at two-second intervals. Each packet is 41 bytes, which includes a
35-byte configuration BPDU message, a 2-byte Frame Relay header, 2-byte
Ethertype, and a 2-byte FCS.
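To put these numbers in perspective, the per-DLCI BPDU load can be estimated from the frame size and the hello interval. For the DEC protocol, which sends one 41-byte frame per second:

```latex
41\ \mathrm{bytes} \times 8\ \tfrac{\mathrm{bits}}{\mathrm{byte}} \times 1\ \tfrac{\mathrm{frame}}{\mathrm{s}} = 328\ \mathrm{bps\ per\ DLCI}
```

The IEEE protocol, at one frame every two seconds, generates half that (164 bps per DLCI); both figures scale with the number of DLCIs over which the BPDUs are replicated.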

This output means you have a problem with the cable, channel service
unit/data service unit (CSU/DSU), or the serial line. You need to troubleshoot
the problem with a loopback test. To do a loopback test, follow the steps
below:

Set the serial line encapsulation to HDLC and keepalive to 10
seconds. To do so, issue the commands encapsulation
hdlc and keepalive 10 under the serial
interface.

Place the CSU/DSU or modem in local loop mode. If the line protocol
comes up when the CSU, DSU or modem is in local loopback mode (indicated by a
"line protocol is up (looped)" message), it suggests that the problem is
occurring beyond the local CSU/DSU. If the status line does not change states,
there is possibly a problem in the router, connecting cable, CSU/DSU or modem.
In most cases, the problem is with the CSU/DSU or modem.

Ping your own IP address with the CSU/DSU or modem looped. There
should not be any misses. An extended ping of 0x0000 is helpful in resolving
line problems since a T1 or E1 derives clock from data and requires a
transition every 8 bits. B8ZS ensures that. A heavy zero data pattern helps to
determine if the transitions are appropriately forced on the trunk. A heavy
ones pattern is used to appropriately simulate a high zero load in case there
is a pair of data inverters in the path. The alternating pattern (0x5555)
represents a "typical" data pattern. If your pings fail or if you get cyclic
redundancy check (CRC) errors, a bit error rate tester (BERT) with an
appropriate analyzer from the telco is needed.

When you are finished testing, make sure you return the
encapsulation to Frame Relay.
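The temporary test configuration from step 1 can be sketched as follows (the interface number is hypothetical); remember to restore the Frame Relay encapsulation when the test is complete:

```
interface Serial0
 encapsulation hdlc
 keepalive 10
```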

This line in the output means that the router is getting a carrier
signal from the CSU/DSU or modem. Check to make sure the Frame Relay provider
has activated their port and that your Local Management Interface (LMI)
settings match. Generally, the Frame Relay switch ignores the data terminal
equipment (DTE) unless it sees the correct LMI (Cisco routers default to
"cisco" LMI). Check to make sure the Cisco router is transmitting data. You will most
likely need to check the line integrity using loop tests at various locations
beginning with the local CSU and working your way out until you get to the
provider's Frame Relay switch. See the previous section for how to perform a
loopback test.

If you did not turn keepalives off, this line of output means that the
router is talking with the Frame Relay provider's switch. You should be seeing
a successful exchange of two-way traffic on the serial interface with no CRC
errors. Keepalives are necessary in Frame Relay because they are the mechanism
that the router uses to "learn" which data-link connection identifiers (DLCIs)
the provider has provisioned. To watch the exchange, you can safely use
debug frame-relay lmi in almost all situations. The
debug frame-relay lmi command generates very few
messages and can provide answers to questions such as:

Is the Cisco Router talking to the local Frame Relay switch?

Is the router getting full LMI status messages for the subscribed
permanent virtual circuits (PVCs) from the Frame Relay provider?

Notice the status of "DLCI 980" in the output above. The possible
values of the status field are explained below:

0x0-Added/inactive means that the switch has this
DLCI programmed but for some reason (such as the other end of this PVC is
down), it is not usable.

0x2-Added/active means the Frame Relay switch has
the DLCI and everything is operational. You can start sending it traffic with
this DLCI in the header.

0x3-A combination of an active status (0x2) and the RNR (or r-bit)
set (0x1). This means that the switch - or a particular queue on the switch -
for this PVC is backed up, and you should stop transmitting in case frames are
spilled.

0x4-Deleted means that the Frame Relay switch
doesn't have this DLCI programmed for the router. But it was programmed at some
point in the past. This could also be caused by the DLCIs being reversed on the
router, or by the PVC being deleted by the telco in the Frame Relay cloud.
A DLCI that is configured on the router, but that the switch does not have, shows up as 0x4.

IP split horizon checking is disabled by default for Frame Relay
encapsulation so routing updates will come in and out the same interface. The
routers learn the data-link connection identifiers (DLCIs) they need to use
from the Frame Relay switch via Local Management Interface (LMI) updates. The
routers then use Inverse ARP for the remote IP address and create a mapping of
local DLCIs and their associated remote IP addresses. Additionally, certain
protocols such as AppleTalk, transparent bridging, and IPX cannot be supported
on partially meshed networks because they require "split horizon," in which a
packet received on an interface cannot be transmitted out the same interface,
even if the packet is received and transmitted on different virtual circuits.
Configuring Frame Relay subinterfaces ensures that a single physical interface
is treated as multiple virtual interfaces. This capability allows us to
overcome split horizon rules. Packets received on one virtual interface can now
be forwarded out another virtual interface, even if they are configured on the
same physical interface.

You are not able to ping your own IP address on a multipoint Frame
Relay interface. This is because Frame Relay multipoint (sub)interfaces are
non-broadcast, unlike Ethernet interfaces, point-to-point High-Level Data Link
Control (HDLC) interfaces, and Frame Relay point-to-point subinterfaces.

Furthermore, you are not able to ping from one spoke to another spoke
in a hub and spoke configuration. This is because there is no mapping for the
other spoke's IP address (and none is learned via Inverse ARP). But if you
configure a static map (using the frame-relay map
command) for your own IP address (or one for the remote spoke) to use the
local DLCI, you can then ping your devices.

The broadcast keyword provides two
functions: it forwards broadcasts when multicasting is not enabled, and it
simplifies the configuration of Open Shortest Path First (OSPF) for
non-broadcast networks that use Frame Relay.

The broadcast keyword might also be required
for some routing protocols -- for example, AppleTalk -- that depend on regular
routing table updates, especially when the router at the remote end is waiting
for a routing update packet to arrive before adding the route.

By requiring selection of a designated router, OSPF treats a
non-broadcast, multi-access network such as Frame Relay in much the same way as
it treats a broadcast network. In previous releases, this required manual
assignment in the OSPF configuration using the neighbor
router configuration command. When the frame-relay
map command is included in the configuration with the
broadcast keyword, and the ip ospf
network command (with the broadcast
keyword) is configured, there is no need to configure any neighbors manually.
OSPF now automatically runs over the Frame Relay network as a broadcast
network. (See the ip ospf network interface command
for more detail.)

Note: The OSPF broadcast mechanism assumes that IP class D addresses are
never used for regular traffic over Frame Relay.

Once you create a specific type of subinterface, you cannot change it
without a reload. For example, you cannot create a multipoint subinterface
serial0.2, then change it to point-to-point. To change it, you need to either
reload the router or create another subinterface. This is the way the Frame
Relay code works in Cisco IOS® software.

Approximately 1000 DLCIs can be configured on a single physical link,
given a 10-bit address. Because certain DLCIs are reserved (the exact set is
vendor-implementation-dependent), the maximum is about 1000. The range for
Cisco LMI is 16-1007. The stated range for ANSI/ITU is 16-992. These are the
DLCIs that carry user data.
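The usable counts follow directly from the ranges quoted above; a quick check
in Python:

```python
# A 10-bit DLCI field gives 2**10 possible values, but only part of the
# range carries user data. The ranges below are the ones quoted above.
total = 2 ** 10                      # all possible 10-bit DLCIs

cisco_user = range(16, 1007 + 1)     # Cisco LMI user-data DLCIs
ansi_user = range(16, 992 + 1)       # ANSI/ITU user-data DLCIs

print(total)            # 1024
print(len(cisco_user))  # 992 usable DLCIs with Cisco LMI
print(len(ansi_user))   # 977 usable DLCIs with ANSI/ITU LMI
```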

However, when configuring Frame Relay VCs on subinterfaces, you need to
consider a practical limit known as the IDB limit. The total number of
interfaces and subinterfaces per system is limited by the number of interface
descriptor blocks (IDBs) that your version of Cisco IOS supports. An IDB is a
portion of memory that holds information about the interface such as counters,
status of the interface, and so on. IOS maintains an IDB for each interface
present on a platform and maintains an IDB for each subinterface. Higher-speed
interfaces require more memory than lower-speed interfaces. Each platform
supports a different maximum number of IDBs, and these limits can change with
each Cisco IOS release.

The LMI protocol requires that all permanent virtual circuit (PVC)
status reports fit into a single packet and generally limits the number of
DLCIs to less than 800, depending on the maximum transmission unit (MTU) size.

The default MTU on serial interfaces is 1500 bytes, yielding a maximum
of 296 DLCIs per interface. You can increase the MTU to support a larger full
status update message from the Frame Relay switch. If the full status update
message is larger than the interface MTU, the packet is dropped, and the
interface giant counter is incremented. When changing the MTU, ensure the same
value is configured at the remote router and intervening network devices.
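As a rough sanity check, the 296-DLCI figure can be reproduced by assuming
about 20 bytes of fixed LMI message overhead and a 5-byte status element per
PVC. Both sizes are assumptions; the exact values depend on the LMI type:

```python
def max_dlcis(mtu: int, overhead: int = 20, ie_size: int = 5) -> int:
    """Estimate how many PVCs fit into one LMI full status report.
    overhead (fixed message bytes) and ie_size (bytes per PVC status
    element) are assumed values; they vary by LMI type."""
    return (mtu - overhead) // ie_size

print(max_dlcis(1500))   # 296 -> matches the figure quoted above
print(max_dlcis(4470))   # 890 -> a larger MTU raises the ceiling
```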

Note that these numbers vary slightly, depending on the LMI type. The
per-router (not per-interface) maximum DLCI guidelines, based on
extrapolation from empirical data established on a Cisco 7000 router
platform, are listed below:

A practical DLCI limit also depends on whether the VCs are running a
dynamic or static routing protocol. Dynamic routing protocols, and other
protocols like IPX SAP that exchange database tables, send hellos and
forwarding information messages which must be seen and processed by the CPU. As
a general rule, using static routes will allow you to configure a larger number
of VCs on a single Frame Relay interface.

If you are using subinterfaces, do not put an IP, IPX, or AppleTalk
address on the main interface. Assign DLCIs to their subinterfaces before you
enable the main interface to ensure that frame-relay
inverse-arp works properly. If Inverse ARP malfunctions anyway,
follow the steps below:

Turn off Inverse Address Resolution Protocol (ARP) for that DLCI by
using the no frame-relay inverse-arp ip 16 command
(where 16 is the DLCI in question), followed by the
clear frame-relay-inarp command.

Routing Information Protocol (RIP) updates flow every 30 seconds. Each
RIP packet can contain up to 25 route entries, for a total of 536 bytes; 36
bytes of this total are header information, and each route entry is 20 bytes.
Therefore, if you advertise 1000 routes over a Frame Relay link configured for
50 DLCIs, the result is 1 MB of routing update data every 30 seconds, or 285
kbps of bandwidth consumed. On a T1 link, this bandwidth represents 18.7
percent of the bandwidth, with each update lasting 5.6 seconds. This amount
of overhead is considerable and only borderline acceptable; the committed
information rate (CIR) would have to be close to the access speed. Obviously,
anything less than a T1 would incur too much overhead. For example:
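The RIP figures above can be reproduced as follows (the usable T1 rate of
1.536 Mbps is an assumption):

```python
import math

# RIP parameters from the text: 25 entries per packet, 36-byte header,
# 20 bytes per route entry, updates every 30 seconds.
routes, dlcis = 1000, 50
entries_per_pkt, header_bytes, entry_bytes = 25, 36, 20

pkts = math.ceil(routes / entries_per_pkt)              # 40 packets per DLCI
bytes_per_dlci = routes * entry_bytes + pkts * header_bytes
total = bytes_per_dlci * dlcis                          # ~1 MB per cycle

bps = total * 8 / 30                                    # sent every 30 seconds
t1 = 1_536_000                                          # assumed usable T1 rate

print(round(bps / 1000))          # ~286 kbps (quoted above as 285 kbps)
print(round(bps / t1 * 100, 1))   # ~18.6 percent of a T1
print(round(total * 8 / t1, 1))   # ~5.6 seconds per update
```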

Interior Gateway Routing Protocol (IGRP) updates flow every 90 seconds
(this interval is configurable). Each IGRP packet can contain 104 route
entries, for a total of 1492 bytes, 38 of which are header information, and
each route entry is 14 bytes. If you advertise 1000 routes over a Frame Relay
link configured with 50 DLCIs, the result is approximately 720 KB of routing
update data every 90 seconds, or 64 kbps of bandwidth consumed. On a T1 link,
this bandwidth would represent 4.2 percent of the bandwidth, with each update
duration being 3.7 seconds. This overhead is an acceptable amount:

Routing Table Maintenance Protocol (RTMP) routing updates occur every
10 seconds (this interval is configurable). Each RTMP packet can contain up
to 94 extended route entries; each packet carries 23 bytes of header
information, and each route entry is 6 bytes (564 bytes of route entries in a
full packet). If you advertise 1000 AppleTalk networks over a Frame Relay
link configured for 50 DLCIs, the result is approximately 313 KB of RTMP
updates every 10 seconds, or 250 kbps of bandwidth consumed. To remain within
an acceptable level of overhead (15 percent or less), a T1 rate is required.
For example:

1000/94 = 11 packets X 23 bytes = 253 header bytes

1000 X 6 = 6000 bytes of route entries

Total = 6253 X 50 DLCIs = 313 KB of RTMP updates every 10 seconds

313,000 / 10 sec X 8 bits = 250 kbps

IPX RIP packet updates occur every 60 seconds (this interval is
configurable). Each IPX RIP packet can contain up to 50 route entries, for a
total of 438 bytes; 38 bytes are header information, and each route entry is
8 bytes. If you advertise 1000 IPX routes over a Frame Relay link configured
for 50 DLCIs, the result is approximately 438 KB of IPX updates every 60
seconds, or 58.4 kbps of bandwidth consumed. To remain within an acceptable
level of overhead (15 percent or less), a rate of 512 kbps is required. For
example:

IPX service access point (SAP) packet updates occur every 60 seconds
(this interval is configurable). Each IPX SAP packet can contain up to seven
advertisement entries, for a total of 486 bytes; 38 bytes are header
information, and each advertisement entry is 64 bytes. If you broadcast 1000
IPX advertisements over a Frame Relay link configured for 50 DLCIs, the
result is approximately 3.5 MB of SAP updates every 60 seconds, or about 463
kbps of bandwidth consumed. To remain within an acceptable level of overhead
(15 percent or less), a rate of greater than 2 Mbps is required. Obviously,
SAP filtering is required in this scenario. Compared to all the other
protocols mentioned in this section, IPX SAP updates require the most
bandwidth:
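The SAP numbers can be reproduced the same way as for the other protocols;
the scale of the result is what makes SAP filtering mandatory:

```python
import math

# SAP parameters from the text: 7 entries per packet, 38-byte header,
# 64 bytes per advertisement entry, updates every 60 seconds.
services, dlcis = 1000, 50
entries_per_pkt, header_bytes, entry_bytes = 7, 38, 64

pkts = math.ceil(services / entries_per_pkt)        # 143 packets per DLCI
total = (services * entry_bytes + pkts * header_bytes) * dlcis
bps = total * 8 / 60                                # sent every 60 seconds

print(round(total / 1_000_000, 2))       # ~3.47 MB per update cycle
print(round(bps / 1000))                 # ~463 kbps
print(round(bps / 0.15 / 1_000_000, 1))  # ~3.1 Mbps to stay under 15 percent
```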

In some cases, the keepalive on the Cisco device needs to be set
slightly shorter (about 8 seconds) than the keepalive on the switch. The need
for this shows up when the interface repeatedly cycles up and down.

Serial interfaces, which are by default multipoint, are non-broadcast
media, while point-to-point subinterfaces are broadcast. If you are using
static routes, you can point to either the next hop or the serial subinterface.
For multipoint, you need to point to the next hop. This concept is very
important when doing OSPF over Frame Relay. The router needs to know that this
is a broadcast interface for OSPF to work.

OSPF and multipoint can be very troublesome. OSPF needs a Designated
Router (DR). If you start losing PVCs, some routers may lose connectivity and
try to become a DR even though other routers still see the old DR. This causes
the OSPF process to malfunction.

Overhead associated with OSPF is not as obvious and predictable as that
with traditional distance vector routing protocols. The unpredictability comes
from whether or not the OSPF network links are stable. If all adjacencies to a
Frame Relay router are stable, only neighbor hello packets (keepalives) will
flow, which is comparatively much less overhead than that incurred with a
distance vector protocol (such as RIP and IGRP). If, however, routes
(adjacencies) are unstable, link-state flooding will occur, and bandwidth can
quickly be consumed. OSPF is also very processor-intensive when running the
Dijkstra algorithm, which is used for computing routes.

In earlier releases of Cisco IOS software, special care had to be taken
when configuring OSPF over multiaccess nonbroadcast media such as Frame
Relay, X.25, and ATM. The OSPF protocol treats these media like any other
broadcast medium, such as Ethernet. Nonbroadcast multiaccess (NBMA) clouds
are typically built in a hub and spoke topology. PVCs or switched virtual
circuits (SVCs) are laid out in a partial mesh, and the physical topology
does not provide the multiaccess connectivity that OSPF believes is there. In
the case of point-to-point serial interfaces, OSPF always forms an adjacency
between the neighbors. OSPF
adjacencies exchange database information. In order to minimize the amount of
information exchanged on a particular segment, OSPF elects one router to be a
DR, and one router to be a backup designated router (BDR) on each multiaccess
segment. The BDR is elected as a backup mechanism in case the DR goes down.

The idea behind this setup is that routers have a central point of
contact for information exchange. The selection of the DR became an issue
because the DR and BDR needed to have full physical connectivity with all
routers that exist on the cloud. Also, because of the lack of broadcast
capabilities, the DR and BDR needed to have a static list of all other routers
attached to the cloud. This setup is achieved using the
neighbor command:

neighbor ip-address [priority number] [poll-interval
seconds]

In later releases of Cisco IOS software, different methods can be used
to avoid the complications of configuring static neighbors and having specific
routers becoming DRs or BDRs on the nonbroadcast cloud. Which method to use is
influenced by whether the network is new or an existing design that needs
modification.

A subinterface is a logical way of defining an interface. The same
physical interface can be split into multiple logical interfaces, with each
subinterface defined as point-to-point. Subinterfaces were originally created
to better handle issues caused by split horizon over NBMA networks running
distance vector routing protocols.

A point-to-point subinterface has the properties of any physical
point-to-point interface. As far as OSPF is concerned, an adjacency is always
formed over a point-to-point subinterface with no DR or BDR election. OSPF
considers the cloud a set of point-to-point links rather than one multiaccess
network. The only drawback of the point-to-point approach is that each
segment must belong to a different subnet. This might not be acceptable when
administrators have already assigned one IP subnet to the whole cloud. Another
workaround is to use IP unnumbered interfaces on the cloud. This scenario also
might be a problem for some administrators who manage the WAN based on IP
addresses of the serial lines.
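To illustrate the addressing cost of the point-to-point model: each PVC
consumes its own subnet, typically a /30 with two usable host addresses. A
sketch using the RFC 5737 documentation range (the addresses are examples,
not taken from this document):

```python
import ipaddress

# Carve point-to-point /30 subnets, one per PVC, out of a larger block.
cloud = ipaddress.ip_network("192.0.2.0/24")
subnets = list(cloud.subnets(new_prefix=30))

print(len(subnets))              # 64 point-to-point links fit in a /24
print(subnets[0])                # 192.0.2.0/30
print(list(subnets[0].hosts()))  # two usable addresses per link
```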