As we talked about in our last discussion, it’s really amazing that the Internet functions as well as it does. Anyone with access on one side of the planet can instantly send information to someone on the other side, including email, voice, video, and many other types of data. Ever wonder how it all manages to work so well most of the time? The answer, which is not usually covered in CCENT/CCNA discussions, is the Border Gateway Protocol, currently at version 4.

When I first started reading up on networking protocols, I was impressed at how OSPF could communicate vast amounts of accurate reachability information to devices in its domain. The problem, though, is that even a large network of, say, 2,000 nodes is far different from the Internet. Communicating the intricate details about every conceivable route (including host routes) would make Internet routers capable of cooking eggs from the sheer heat, because of the massive routing tables and computations involved.

This is where we can talk about an important but sometimes overused networking term—scalable. Scalability is the ability of something to grow in a controlled, measured fashion, rather than haphazardly or too rapidly. To allow the Internet to be scalable, routing has to be simplified in some form or fashion, and this is the unique ability of BGP as an exterior gateway protocol, which routes between autonomous systems (networks under a common administration).

Routing across the Internet is a lot like the old line you hear from military personnel in most movies—“That information is on a need-to-know basis” (which usually implies that the person being told does not need to know). For example, the Internet Service Provider where I first worked had a block of Class C addresses from 216.145.0.0 to 216.145.31.255, which were allocated to the various customers they serviced. Advertising all 32 separate Class C routes, or worse, even smaller subnets, would have put at least 32 entries into every Internet routing table. Instead, because of the beauty of BGP, they advertised a single entry, 216.145.0.0/19. This greatly reduced the size of the potential routing table, and if you multiply that savings across the world, you can see why this works so well.
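If you want to verify that summarization for yourself, Python's standard `ipaddress` module can collapse the 32 contiguous Class C (/24) networks into the single /19 advertisement:

```python
import ipaddress

# The ISP's 32 Class C (/24) networks: 216.145.0.0 through 216.145.31.0
class_c_blocks = [ipaddress.ip_network(f"216.145.{n}.0/24") for n in range(32)]

# collapse_addresses merges contiguous networks into the fewest possible prefixes
summary = list(ipaddress.collapse_addresses(class_c_blocks))
print(summary)  # [IPv4Network('216.145.0.0/19')]
```

Thirty-two /24s is 2^5 networks, so five bits come off the /24 prefix length, which is exactly how we arrive at /19.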

Another important concept is peering, where service providers interconnect and exchange their routes. Originally this took place at public exchange points, but large carriers like AT&T and Verizon also connect directly to one another, an arrangement called private peering. Most of these entities also have rules about the sizes of the network advertisements they will accept.

Next time we will start exploring the newest version of the Internet Protocol, IP Version 6!

As promised, now we want to consider the most extensive WAN ever created–the Internet! Begun as a public project, the Internet is not actually a single monolithic network, but rather a collection of INTER-Connected NETworks (notice the way the terms break out). If you think about all of the logistics involved in integrating the various components, geographies, devices, and access methods, it’s a wonder that it works at all! When I started in the networking business back in 1998, the hierarchy was much simpler: large backbone carriers (AT&T, UUNET/Verizon, etc.) at the top, regional providers below them, and smaller Internet Service Providers connecting individual customers. Needless to say, a lot has happened with the public Internet, with many more developments on the horizon.

In the “old days” there were very limited ways to access the Internet, most often through dedicated access or dial-up. Today there are many ways to get to Internet resources, including cellular (3G/4G), DSL, cable, and so forth. For the purpose of simplicity, we will narrow the types of access down to the four most common, as follows:

1. Dedicated Internet Access: Probably still the “gold standard” of Internet access, dedicated access uses private-line connectivity of some type between the customer location and the provider’s Point of Presence (POP). If you remember our discussion about private lines, this involves a telecommunications circuit, usually terminated by a CSU/DSU, with speeds from T1/E1 up to insanely huge optical connections.

2. Dial-Up: Archaic by today’s standards, dial-up networking once ruled the information world, with big names like America Online, CompuServe, and Earthlink considered the “heavyweights.” An analog device called a modem (modulator-demodulator) turned digital information into analog tones for transmission over standard phone lines, usually at very slow speeds. Originally it was terribly slow (I remember 2400 bps being the top speed), but newer techniques eventually promised speeds of 56 Kbps. Dial-up was eclipsed by the introduction of broadband technologies (considered next), and is difficult to find today.

3. Cable: If you have ever had cable television, you know first-hand how much information can be squeezed through that fairly narrow coaxial cable. Hundreds of channels of specialized content are available at the click of a remote-control button. In many ways, delivering Internet access across this connection is similar to adding another “channel” of sorts into the lineup. Boasting great speeds, it is a popular option where available.

4. Digital Subscriber Line: For years it was possible only to transmit voice conversations across analog telephone lines, including dial-up networking (which used audio tones for transmission). Of the range of frequencies that can be carried across an analog line, only a fairly narrow band is used for voice, leaving the rest open for, yes, you guessed it, transmitting data. The benefit is being able to support voice calls and data transmission at the same time.

While this covers Internet access, there is much more to say about how the Internet communicates, which we will look at next time!

As you can observe from the evolution of cellular phones and the Internet, technology never stands still; rather, it continues to morph, change, and improve, often at a dizzying pace. Such was the case with leased lines, especially after the introduction of the personal computer. Even before the advent of the public Internet, business entities needed efficient connectivity between multiple locations, but disliked the mileage-based charges involved with private-line WANs. To complicate matters, these links were not in constant use, meaning that when they sat idle (such as during the night), customers were paying for bandwidth they were not using. Well, why not share bandwidth, and make the whole process less costly and more efficient? The solution was packet-switched networks, including such technologies as SMDS, ATM, and of course Frame-Relay; since the first two are now obsolete, we will skip over them and concentrate on Frame-Relay.

When I started out in the industry, Frame-Relay was “all the rage” because it solved several problems with private lines right away. First, the only mileage charge was between the customer premises and the service provider Point of Presence (or POP). Usually this was less than thirty miles (at least in my area), which trimmed the cost substantially. In addition, you were no longer restricted to joining pairs of sites; instead, you could put many sites on the network while consuming only one physical port on your equipment. It also allowed customers to share bandwidth in the network, which drove down costs further. For a time, this was considered cutting-edge connectivity.

Now on to the mechanics of how all of this works. First, connections across the network are logical rather than physical, which is how the flexibility is achieved to begin with. Equipment connects to the service provider at a logical entry point called a port, sold at speeds in increments of T1 (1.544 Mbps). Every site needs a port to communicate, but the logical point-to-point connections are created using Permanent Virtual Circuits, or PVCs, and identified using Layer 2 WAN addresses called Data Link Connection Identifiers, abbreviated DLCIs. Multiple PVCs can terminate on a single port, as in the diagram above. Notice that the Denver location has three PVCs defined between itself and the other three sites, and the DLCIs identify the specific PVCs in use. All of this information is communicated by service provider equipment using a protocol called the Local Management Interface (LMI), which acts as a keepalive mechanism and also carries DLCI status and other information.
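To make the port/PVC/DLCI relationship concrete, here is a minimal sketch in Python. The hub site follows the Denver example above, but the remote site names and DLCI numbers are purely illustrative assumptions, not values from any real network:

```python
# One physical port at Denver, three logical PVCs riding on it.
# DLCI numbers are locally significant; these are made up for illustration.
denver_pvcs = {
    102: "SiteA",
    103: "SiteB",
    104: "SiteC",
}

def forward(dlci: int) -> str:
    """Return the remote site a frame reaches when tagged with this DLCI."""
    site = denver_pvcs.get(dlci)
    if site is None:
        raise ValueError(f"No PVC configured for DLCI {dlci}")
    return site

print(forward(103))  # SiteB
```

The point of the sketch is simply that one physical port multiplexes several logical circuits, and the DLCI in each frame's header is what selects the circuit.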

Frame-Relay is a complex topic with many more nuances than this, but this overview gives you a good start.

One of the sayings I tend to use when explaining network concepts is a twist on the opening words of the book of Genesis, “In the beginning was the mainframe…” because so many things we take for granted today started with that piece of technology. In terms of WAN connectivity, the original form of connectivity took place across specialized telephone lines that carried data rather than voice conversations. Because the end-user/customer paid the IXC for the exclusive use of the line, they were referred to as private lines or point-to-point leased lines. As mainframes were replaced by personal computers (and networks), these lines connected to devices that convert bits of data into electrical impulses that can be transmitted across the lines for long distances. The technical term for this device is a Channel Service Unit/Data Service Unit, or CSU/DSU, which used to be an external device but is now integrated into interface modules on routers. This is actually a good time to introduce two more terms, namely DTE and DCE. A Data Terminal Equipment (DTE) device is the terminal or end-point sending and receiving information, usually a computer or router. DCE devices, on the other hand, perform the conversion between raw data and the format needed for transmission, such as a CSU/DSU or modem. The acronym stands for either Data Communications Equipment or Data Circuit-Terminating Equipment, depending on the publisher; Cisco prefers the latter as a general rule.

Private lines utilize electrical circuits to create the pathway between locations, as illustrated in the diagram above; this is in contrast to packet-switched networks (which we will deal with later). Depending on the part of the world you live in, there will be differing names and capacities, such as T1 (1.544 Mbps) or E1 (2.048 Mbps), and even higher speeds. These are typically copper connections, with very high speeds delivered over fiber optic connections (OC-X). In North America these are the T1/DS1 and T3/DS3 standards, while the rest of the world utilizes the E1/E3 standards. Because these lines are charged by mileage, they are often very expensive; they are also very secure, since the customer has private, full-time use of the connection, although any time the line sits idle, that bandwidth is wasted. The structure of the framing and line coding can be rather complex, and is something you should definitely be familiar with as you pursue your certification studies.
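As a quick sanity check on that T1 figure, the familiar 1.544 Mbps rate comes straight from the channel math: 24 DS0 channels of 64 Kbps each, plus 8 Kbps of framing overhead.

```python
# A T1/DS1 carries 24 DS0 channels at 64,000 bps each, plus one framing
# bit per 193-bit frame at 8,000 frames per second (= 8,000 bps overhead).
ds0_channels = 24
ds0_rate_bps = 64_000
framing_bps = 8_000

t1_rate_bps = ds0_channels * ds0_rate_bps + framing_bps
print(t1_rate_bps)  # 1544000, i.e. the 1.544 Mbps T1 rate
```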

Next time we will look at packet-switched WANs! Don’t you just love this stuff?

I started my career in the networking industry at a small Internet Service Provider in Seattle, Washington in the United States (this was in the late 1990’s). At that time, most end-users accessed the Internet using dial-up connectivity, and modem speeds topped out at about 28 Kbps (yes, it sounds like ancient history). From there I went to work for AT&T, where I spent the next five or so years helping Fortune 500 businesses connect multiple locations together; needless to say, I spent a lot more time dealing with Wide Area Networks than Local Area Networks.

Although I have since mastered LANs, I still have a great fondness for WANs and enjoy building labs that simulate them. LANs are easy to recognize, first because they use Ethernet switches, but also because they occupy a fairly localized geographical area (hence the term LAN). Wide area networks are also easy to recognize, as they almost universally depend on large telecommunications providers (AT&T, Verizon, British Telecom, etc.) and use an entirely different set of connections to provide services. A term that can be confusing, however, is MAN, or Metropolitan Area Network, and as such, it needs some clarification.

The simplest way to differentiate MANs from WANs is to look at geography once again. LANs connect computing devices within a floor, building, or campus, but no further than that. WANs tie together sites across significant distances, such as nationally or internationally. MANs are networks that lie within a smaller, more specific region; in a sense all MANs are WANs, but not all WANs are MANs. Think of the concepts like squares and rectangles: all squares are rectangles, but not all rectangles are squares (using the classic shape definitions).

The more technical way of explaining Metropolitan Area Networks involves a little bit of US telecommunications history, specifically the breakup of AT&T in the 1980’s. In the new system following this “divestiture,” new regions were created within states called Local Access and Transport Areas, or LATAs. Local phone companies (e.g., Bell Atlantic, Pacific Bell) operated within these LATAs, and Inter-Exchange Carriers (IXCs) created connectivity between them; each had to operate separately. In this arrangement, a MAN connected locations within a LATA, while a WAN connected locations in different LATAs. A LATA usually encompassed a city and its outlying suburbs, hence the term “metropolitan.” With regulatory changes, these distinctions are not nearly as relevant, which explains why the term MAN is used far less frequently today.

Even though EIGRP is a simpler protocol in many ways, there are still important concepts to understand, one of which is how it chooses routes. As we talked about before, the lowest metric (constrained bandwidth + cumulative delay) wins the contest for the best route, without the somewhat confusing exceptions that exist in OSPF. However, because there are elements of Distance Vector routing in this hybrid protocol, there inevitably have to be some loop prevention mechanisms.

EIGRP uses two forms of the calculated metric to a network to select the best loop-free route, and at first glance they might sound identical. The first value is called the Feasible Distance (FD), which is the complete metric from the router to the destination network. The second value is called the Reported Distance (RD), which is the metric from the point of view of the next-hop router. If a path is loop-free, its RD will be less than the FD, expressed mathematically as RD < FD. Paths with the opposite relationship could cause loops, so EIGRP discards them. That may sound simple enough, but as you have probably heard on infomercials, “But wait, there’s MORE”!

Routes that meet the RD < FD test (called the Feasibility Condition) are held in the EIGRP Topology Table, and the best route gets installed in the IP Routing Table. This best route is called the Successor (or Successor Route) in EIGRP, and if other equal-cost routes exist, they are also flagged as successors and installed in the routing table. One of the truly amazing features of EIGRP, however, has to do with backup routes. In just about every other protocol, if the primary route fails, the entire convergence process starts over to select a new one. In EIGRP, though, another loop-free path is held in reserve for immediate use if the successor fails; this route is called the Feasible Successor.
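The Successor/Feasible Successor logic can be sketched in a few lines of Python. The router names and metric values below are made up for illustration; the part that matters is the RD < FD comparison:

```python
def classify_paths(paths):
    """paths: list of dicts with 'via' (next hop), 'fd' (our total metric
    through that next hop), and 'rd' (the next hop's own metric)."""
    best_fd = min(p["fd"] for p in paths)
    # Best (and any equal-cost) paths become Successors.
    successors = [p for p in paths if p["fd"] == best_fd]
    # Feasibility Condition: a backup is provably loop-free only if its
    # neighbor's Reported Distance is less than the best Feasible Distance.
    feasible_successors = [p for p in paths
                           if p["fd"] > best_fd and p["rd"] < best_fd]
    return successors, feasible_successors

paths = [
    {"via": "R2", "fd": 3072, "rd": 2816},  # lowest FD -> Successor
    {"via": "R3", "fd": 4096, "rd": 2816},  # RD 2816 < 3072 -> Feasible Successor
    {"via": "R4", "fd": 5120, "rd": 4096},  # RD 4096 >= 3072 -> held, not feasible
]
succ, fs = classify_paths(paths)
print([p["via"] for p in succ], [p["via"] for p in fs])  # ['R2'] ['R3']
```

Notice that R4's path is not necessarily a loop, but EIGRP cannot prove it is loop-free, so it never qualifies as a backup.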

If you have ever watched any type of beauty pageant (Miss America, Miss Universe, etc.) then you have actually seen this in action! After all of the contestants have proceeded through the judging process, one of them is crowned the winner (Successor)! However, the next contestant in line, usually called the first runner-up (Feasible Successor), automatically becomes the winner if anything happens to the original victor–no new pageant is required!

That covers the basic principles of IP routing, next time we will start considering Wide Area Networks!

Unlike most of you, I can get lost in my own backyard; imagine how misdirected I can get when I am actually driving! My beautiful wife Brenda (a feisty little redhead) is a walking, talking, nearly-always-right human GPS, which is wonderful except when I am driving alone and trying to get somewhere. Fortunately, in our world of GPS devices and smartphones (complete with Google Maps), I have some recourse for not getting lost. Even so, these devices are not foolproof, as they once told me that a hotel was in the middle of the Potomac River in Washington DC!

These handy little devices are great because they rely on large databases containing information on mileage, geography, construction, road conditions, etc., to help get you from one place to another. Just as GPS devices use multiple criteria when recommending a route of travel, so EIGRP relies on several different elements when calculating its metric for destination networks. This is remarkably different from most other interior routing protocols, which typically rely on a single element for their metric. Here is a breakdown of the five elements of the EIGRP metric:

1. Bandwidth: At first glance you might think that this is identical to the OSPF cost concept, but there are a couple of important differences. While bandwidth does create a cost-like factor (the higher the better), in EIGRP this cost is not cumulative. Instead, it is based on the lowest bandwidth along the path to a destination network. For example, if one path consists entirely of 100 Mbps links while another has 100 Mbps links plus a single 10 Mbps link, the first path will be preferred, because the second path’s smallest (called constrained) bandwidth is only 10 Mbps. This might sound less ideal than a cumulative cost until you think of how backed up a highway gets when narrowed down to one or two lanes!

2. Delay: Unlike bandwidth, this factor is cumulative along the entire path. The greater the delay, the less desirable the route, because delay is caused by lower bandwidth and/or congestion. Why choose a 4-lane superhighway if the traffic is crawling along at a very slow speed?

3. Reliability: As the name implies, this measures how reliable the route is (0-255).

4. Load: How loaded or saturated the route is (0-255).

5. MTU: The IPv4 Maximum Transmission Unit size.

Keep in mind that only bandwidth and delay are used by default in the metric calculation; the weighting of each element is controlled by a constant called a K-value. Always make sure the K-values match between neighbors, or a relationship will never form.
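With the default K-values (K1=1 for bandwidth, K3=1 for delay, the rest 0), the classic EIGRP metric reduces to a simple formula based on constrained bandwidth and cumulative delay. Here is a sketch; the link speeds and delay values in the example are assumptions chosen for illustration:

```python
def eigrp_metric(min_bandwidth_kbps, total_delay_usec):
    """Classic EIGRP metric with default K-values (K1=1, K3=1, others 0):
    metric = 256 * (10^7 / constrained_bandwidth + cumulative_delay / 10),
    with bandwidth in Kbps and delay expressed in tens of microseconds."""
    bandwidth_term = 10**7 // min_bandwidth_kbps
    delay_term = total_delay_usec // 10
    return 256 * (bandwidth_term + delay_term)

# Example: two 100 Mbps hops -> constrained bandwidth 100,000 Kbps,
# and an assumed delay of 100 usec per hop = 200 usec cumulative.
print(eigrp_metric(100_000, 200))  # 30720
```

Working it through by hand: 10^7 / 100,000 = 100, plus 200/10 = 20 tens of microseconds, gives 120, and 120 x 256 = 30,720, which is why the bandwidth term dominates on slow links while delay becomes the tiebreaker on fast ones.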