Speed matters: how Ethernet went from 3Mbps to 100Gbps… and beyond

In 30 years, Ethernet conquered networking and accelerated from 3Mbps to …

Although watching TV shows from the 1970s suggests otherwise, the era wasn't completely devoid of all things resembling modern communication systems. Sure, the 50Kbps modems that the ARPANET ran on were the size of refrigerators, and the widely used Bell 103 modems only transferred 300 bits per second. But long-distance digital communication was common enough, relative to the number of computers deployed. Terminals could also be hooked up to mainframes and minicomputers over relatively short distances with simple serial lines or with more complex multidrop systems. This was all well known; what was new in the '70s was the local area network (LAN). But how to connect all these machines?

The point of a LAN is to connect many more than just two systems, so a simple cable back and forth doesn't get the job done. Connecting several thousand computers to a LAN can in theory be done using a star, a ring, or a bus topology. A star is obvious enough: every computer is connected to some central point. A bus consists of a single, long cable that computers connect to along its run. With a ring, a cable runs from the first computer to the second, from there to the third and so on until all participating systems are connected, and then the last is connected to the first, completing the ring.

In practice, things aren't so simple. Token Ring is a LAN technology that uses a ring topology, but you wouldn't know it by looking at the network cabling, because computers are hooked up to concentrators (similar to today's Ethernet switches). However, the cable does in fact form a ring, and Token Ring uses a somewhat complex token passing system to determine which computer gets to send a packet at which time. A token circles the ring, and the system in possession of the token gets to transmit. Token Bus uses a physical bus topology, but also uses a token-passing scheme to arbitrate access to the bus. A token network's complexity makes it vulnerable to a number of failure modes, but such networks do have the advantage that performance is deterministic; it can be calculated precisely in advance, which is important in certain applications.

But in the end it was Ethernet that won the battle for LAN standardization through a combination of standards body politics and a clever, minimalist—and thus cheap to implement—design. It went on to obliterate the competition by seeking out and assimilating higher bitrate protocols and adding their technological distinctiveness to its own. Decades later, it had become ubiquitous.

If you've ever looked at the network cable protruding from your computer and wondered how Ethernet got started, how it has lasted so long, and how it works, wonder no more: here's the story.

Brought to you by Xerox PARC

Ethernet was invented by Bob Metcalfe and others at Xerox's Palo Alto Research Center in the mid-1970s. PARC's experimental Ethernet ran at 3Mbps, a "convenient data transfer rate [...] well below that of the computer's path to main memory," so packets wouldn't have to be buffered in Ethernet interfaces. The name comes from the luminiferous ether that was at one point thought to be the medium through which electromagnetic waves propagate, like sound waves propagate through air.

Ethernet used its cabling as radio "ether" by simply broadcasting packets over a thick coaxial line. Computers were connected to the Ethernet cable through "taps," where a hole is punched through the coax jacket and the outer conductor so a connection can be made to the inner conductor. The two ends of the coax cable—branching is not allowed—are fitted with terminating resistors that regulate the electrical properties of the cable so signals propagate throughout the length of the cable but don't reflect back. All computers see all packets pass by, but the Ethernet interface ignores packets that aren't addressed to the local computer or the broadcast address, so the software only has to process packets targeted at the receiving computer.
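That receive-side filtering is simple enough to sketch. Here's an illustrative Python model, not actual interface firmware; the `should_accept` helper and the example MAC addresses are made up for illustration:

```python
# Illustrative model of an Ethernet interface's receive filter: accept only
# frames addressed to this station's MAC address or to the broadcast
# address, and ignore everything else seen on the shared cable.

BROADCAST = "ff:ff:ff:ff:ff:ff"

def should_accept(frame_dest: str, my_mac: str) -> bool:
    """Return True if the frame should be passed up to the software."""
    return frame_dest == my_mac or frame_dest == BROADCAST

# A station with a (made-up) MAC address sees three frames go by:
my_mac = "00:11:22:33:44:55"
print(should_accept("00:11:22:33:44:55", my_mac))  # True: addressed to us
print(should_accept(BROADCAST, my_mac))            # True: broadcast
print(should_accept("66:77:88:99:aa:bb", my_mac))  # False: someone else's
```

This is the whole trick: the wire carries everything, and each interface quietly drops what isn't meant for it.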

Other LAN technologies use extensive mechanisms to arbitrate access to the shared communication medium. Not Ethernet. I'm tempted to use the expression "the lunatics run the asylum," but that would be unfair to the clever distributed control mechanism developed at PARC. I'm sure that the mainframe and minicomputer makers of the era thought the asylum analogy wasn't far off, though.

Ethernet's media access control (MAC) procedures, known as "Carrier Sense Multiple Access with Collision Detect" (CSMA/CD), are based on ALOHAnet. This was a radio network between several Hawaiian islands set up in the early 1970s, where all the remote transmitters used the same frequency. Stations transmitted whenever they liked. Obviously, two of them might transmit at the same time, interfering with each other so both transmissions were lost.

To fix the problem, the central location acknowledges a packet if it was received correctly. If the sender doesn't see an acknowledgment, it tries to send the same packet again a little later. When a collision occurs because two stations transmit at the same time, the retransmissions make sure that the data gets across eventually.

Ethernet improves on ALOHAnet in several ways. First of all, Ethernet stations check to see if the ether is idle (carrier sense) and wait if they sense a signal. Second, once transmitting over the shared medium (multiple access), Ethernet stations check for interference by comparing the signal on the wire to the signal they're trying to send. If the two don't match, there must be a collision (collision detect). In that case, the transmission is broken off. Just to make sure that the source of the interfering transmission also detects a collision, upon detecting a collision, a station sends a "jam" signal for 32 bit times.

Both sides now know their transmission failed, so they start retransmission attempts using an exponential backoff procedure. On the one hand, it would be nice to retransmit as soon as possible to avoid wasting valuable bandwidth, but on the other hand, immediately having another collision defeats the purpose. So each Ethernet station maintains a maximum backoff time, counted as an integer value that is multiplied by the time it takes to transmit 512 bits. When a packet is successfully transmitted, the maximum backoff time is set to one. When a collision occurs, the maximum backoff time is doubled until it reaches 1024. The Ethernet system then selects an actual backoff time that's a random number below the maximum backoff time.

For instance, after the first collision, the maximum backoff time is 2, making the choices for the actual backoff time 0 and 1. Obviously, if two systems both select 0 or both select 1, which will happen 50 percent of the time, there is another collision. The maximum backoff then becomes 4 and the chances of another collision go down to 25 percent for two stations wanting to transmit. After 16 successive collisions, an Ethernet system gives up and throws away the packet.
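The backoff procedure described above can be sketched in a few lines. This is an illustrative Python model of the scheme as the article describes it, not actual hardware or driver logic:

```python
import random

# A sketch of truncated binary exponential backoff. Delays are counted in
# slot times, where one slot is the time needed to transmit 512 bits.

def backoff_slots(collision_count: int) -> int:
    """Pick a random backoff delay after the nth successive collision."""
    if collision_count >= 16:
        # After 16 successive collisions the frame is thrown away.
        raise RuntimeError("frame dropped after 16 collisions")
    # The maximum doubles with each collision: 2, 4, 8, ... capped at 1024.
    limit = min(2 ** collision_count, 1024)
    return random.randrange(limit)  # wait 0 .. limit-1 slots

# After the first collision a station waits 0 or 1 slots; after the
# second, 0-3 slots; and so on.
for n in (1, 2, 3, 10):
    print(n, "collisions -> wait", backoff_slots(n), "slots")
```

Note that the randomness is what breaks the tie: two stations that just collided are unlikely to pick the same delay again, and the doubling maximum spreads contending stations out further under heavy load.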

There used to be a lot of fear, uncertainty, and doubt surrounding the performance impact of collisions. But in practice they're detected very quickly and the colliding transmissions are broken off. So collisions don't waste much time, and CSMA/CD Ethernet performance under load is actually quite good: in their paper from 1976 describing the experimental 3Mbps Ethernet, Bob Metcalfe and David Boggs showed that for packets of 500 bytes and larger, more than 95 percent of the network's capacity is used for successful transmissions, even if 256 computers all continuously have data to transmit. Pretty clever.

Standardization

In the late 1970s, Ethernet was owned by Xerox. But Xerox preferred owning a small piece of a large pie rather than all of a small pie, and it got together with Digital and Intel. As the DIX consortium, they created an open (or at least multi-vendor) 10Mbps Ethernet specification and then quickly ironed out some bugs, producing the DIX Ethernet 2.0 specification.

Then the Institute of Electrical and Electronics Engineers (IEEE) got into the game. Eventually, it produced standard 802.3, which is now considered the official Ethernet standard—although the IEEE carefully avoids using the word "Ethernet" lest it be accused of endorsing any particular vendor. (DIX 2.0 and IEEE 802.3 frames are compatible on the wire, except for one thing: DIX defines the 16-bit field after the addresses as a type field, while 802.3 originally defined it as a length field.)
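As an aside on how the two frame formats manage to coexist on one wire: valid 802.3 length values never exceed 1500, so receivers interpret values of 0x0600 (1536) and above as DIX type codes. A small sketch, with a hypothetical `frame_format` helper:

```python
# Both formats place a 16-bit field after the source address. DIX treats
# it as a type code; 802.3 originally treated it as a payload length.
# Since an 802.3 length is at most 1500, any value >= 0x0600 (1536) is
# read as a DIX/Ethernet II type code.

def frame_format(type_or_length: int) -> str:
    if type_or_length >= 0x0600:
        return "DIX Ethernet II (type field)"
    return "IEEE 802.3 (length field, LLC header follows)"

print(frame_format(0x0800))  # DIX Ethernet II: 0x0800 means IPv4
print(frame_format(0x05DC))  # IEEE 802.3: payload length of 1500
```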

Even right at the beginning, engineers realized that having a single cable snaking through a building was limiting, to say the least. Simply branching the thick coaxial cable wasn't possible; that would do bad things to the data signals. The solution was having repeaters. These regenerate the signal and make it possible to connect two or more Ethernet cables or segments.

The 9.5mm thick coaxial cable also wasn't the easiest type of cabling to work with. For instance, I once saw two telecom company guys hammer on a couple of thick coax cables that went through a wall in order to bend the cables downward. This took them the better part of an hour. Another one told me that he keeps a nice big piece of the stuff in his car: "If the police find a baseball bat in your car they call it a weapon, but a piece of coax works just as well in a fight and the police never give me any trouble."

Although less thug-repellant, thin coax is much easier to use. These cables are half the diameter of thick Ethernet cable and look a lot like TV antenna cable. Thin coax does away with the "vampire taps" that allow new stations to attach anywhere to a thick coax segment. Instead, thin cables end in BNC connectors and computers are attached through T-connectors. The big disadvantage of thin coax Ethernet segments is that if the cable gets interrupted somewhere, the whole network segment goes down. This happens when a new system is connected to the network, but it also happens often by accident, as coax loops have to run past every computer. There had to be a better way.

In the late 1980s, a new specification was developed to allow Ethernet to run over unshielded twisted pair cabling—in other words, phone wiring. UTP cables for Ethernet come as four pairs of thin, twisted cables. The cables can be solid copper or made of thin strands. (The former has better electrical properties; the latter is easier to work with.) UTP cables are outfitted with the now-common RJ45 plastic snap-in connectors. 10Mbps (and 100Mbps) Ethernet over UTP uses only two of the twisted pairs: one for transmitting and one for receiving.

A slight complication to this setup is that every UTP cable is also its own Ethernet segment. So in order to build a LAN with more than two computers, it's necessary to use a multiport repeater, also known as a hub. The hub or repeater simply repeats an incoming signal on all ports and also sends the jam signal to all ports if there's a collision. Complex rules limit the topology and the use of hubs in Ethernet networks, but I'll skip those as I doubt anyone still has interest in building a large scale Ethernet network using repeater hubs.

This setup created its own cabling issues, and they're still with us. Computers use pins 1 and 2 to transmit and pins 3 and 6 to receive, but for hubs and switches, this is the other way around. This means that a computer is connected to a hub using a regular cable, but two computers or two hubs must be connected using "crossover" cables that connect pins 1 and 2 on one side with 3 and 6 on the other side (and vice versa). Interestingly, FireWire, co-developed by Apple, managed to avoid this failure of user-friendliness by simply always requiring a crossover cable.
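The pin swapping is easy to model. In this illustrative Python sketch (the `link_works` helper is hypothetical), a link comes up only if one side's transmit pins land on the other side's receive pins:

```python
# A straight cable maps each pin to itself; a crossover cable swaps the
# transmit pair (pins 1 and 2) with the receive pair (pins 3 and 6).

STRAIGHT = {1: 1, 2: 2, 3: 3, 6: 6}
CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2}

def link_works(a_is_hub: bool, b_is_hub: bool, cable: dict) -> bool:
    """PCs transmit on pins 1/2 and receive on 3/6; hubs do the reverse.
    The link works if A's transmit pins arrive on B's receive pins."""
    a_tx = {3, 6} if a_is_hub else {1, 2}
    b_rx = {1, 2} if b_is_hub else {3, 6}
    return {cable[p] for p in a_tx} == b_rx

print(link_works(False, True, STRAIGHT))    # PC to hub, straight: True
print(link_works(False, False, STRAIGHT))   # PC to PC, straight: False
print(link_works(False, False, CROSSOVER))  # PC to PC, crossover: True
print(link_works(True, True, CROSSOVER))    # hub to hub, crossover: True
```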

Still, the end result was a fast and flexible system—so fast, it's still in use. But more speed was needed.

Man, this article took me back to my networking classes in college. And elicited the same response... booooring. (Someone should create a ring topology protocol called bo-ring.) Still, at least I learnt that networking still puts me to sleep.

Anyone who considers thin Ethernet less thug-resistant than thicknet clearly has never been attacked by a thinnet cat-o-nine tails.


Yep, just google "cat5 whip" to see what he's talking about.

My coworkers and I will occasionally play a game of grab ass and whip the piss out of one another with our cat6 STP strands. Eventually someone gets in a good lick, everyone lets out a collective "OOOOH!" and the game is off.

Ahh, Good old Ars. I love these totally in-depth geeky articles, they're what drew me in to this site in the first place, and if you guys keep the same kind of quality seen in this one, it'll keep me here.

Rather more on topic, it does seem kind of fascinating that ethernet has lasted so long in a (physical) form relatively unchanged from so long ago. I remember all of us theorizing that we would be on fiber interconnects LOONG before now. I guess cheap and sturdy will always win out against the more exotic in the consumer realm, and perhaps that's the way it should be.

It's a great pictorial and accompanying text on Intel's lab where they test their 10G Ethernet hardware. Most people have no idea what's involved with supporting this speed on twisted-pair copper cables.

The first and last parts were very interesting, as I didn't know much about the original thick coax ethernet or the latest in the push to 100Gb. Page two, however, reads like a rushed first draft. It's a jumbled mess of terms that are barely explained before jumping off on some tangent. Even though I already knew most of the concepts beforehand, I still found it hard to comprehend. It's almost as if the first and second pages weren't even written by the same person!

Great article. Does anyone have any idea about the level of Ethernet usage across the Internet? This article mainly talks about Ethernet from a LAN perspective but I had a feeling that it was increasingly being used in the Internet core. Is this true/false?

I wonder how graphene might provide improvements for data over copper cables (or carbon, as the case may be).

Far as I know, graphene is only being used for transistors, not cables. However, it'll probably be very important in creating chips that can handle the signalling speed needed for faster networks in the future.

Think about it: is there any other 30-year-old technology still present in current computers?

Yes: most of the connectors. Although Molex connectors are effectively a 50-year-old tech, that finger-destroying little 4-pin power fucker is still annoyingly present in most computers (now in its 4th decade of use in PCs), along with the various fan header variants. On the subject of power, those IEC connectors that we use were standardized in 1970. Additionally: although they have mostly been phased out, RS232 (9-pin variant) connectors aren't completely dead yet. The other 2 old connectors that you'll most likely find on many current systems, the 15-pin VGA socket and the PS/2 keyboard/mouse ones, are from 1987, so a good 1/4 century old...

chabig wrote:

I thought the need for crossover cables was long gone. Aren't all ports auto-sensing?

Current Ethernet just uses the old name, but it shares very little with the original Ethernet. The essence of classic Ethernet is the shared cable and the clever collision detection method. Current "Ethernet" is about fast point-to-point connections and even faster store-and-forward switches and routers. Classic Ethernet is stuck at 100Mbps and cannot be upscaled. What's left of the inheritance is MAC addresses and the (painfully) small size of Ethernet messages.

I thought the need for crossover cables was long gone. Aren't all ports auto-sensing?

You know, you'd think that, but I had a customer call me up saying she'd decabled her switches to clean them and now some of her phones weren't working. Going through some troubleshooting, I asked "are the switches connected with a crossover cable?" and got the response "what's a crossover cable?" Half an hour later she called back and everything was working fine.

Work on Monday should be fun, hopefully I'll find out what caused the broadcast storm that took down our voicemail server yesterday.

I thought the need for crossover cables was long gone. Aren't all ports auto-sensing?

I guess it depends on the price and age of the equipment involved (one would be surprised how old some of the ethernet gear still in use is). Still, the name dropping of the fruit put me off, especially as I do not recall FireWire ever being considered for networking.

I met Radia Perlman a couple of years back in London, when we both worked at Sun. She's a fascinating woman and very obviously immensely clever, but in a humble, charming and personable way - if you've ever seen Ray Kurzweil talk you know what I'm getting at.

When I asked her about STP, she said that her manager had said it was an impossible problem to solve, but when she thought about it properly, it was 'such a trivial problem' that 'she couldn't believe no one else had thought of it before her'. And then she went on to explain it in great detail.

We need more folks like her who invent and create wealth, rather than litigate!

She's also got a great story about when she met Bill Gates, just after Microsoft paid Sun a whole bunch of cash to make lots of litigation go away (in 2004), and when shaking his hand she commented to him that "we're all friends now" - the look on his face said otherwise.

This was a very good article that covered just about everything important in the evolution of Ethernet speed, from repeaters to hubs, bridges, and routers, as well as cabling and signaling...except the very important relationship between signaling speed, minimum frame length, and length of the cable (in classic Ethernet):

Assume that the velocity of propagation on the shared medium is v and the time it takes to transmit a bit is t, essentially the inverse of the signaling frequency. Then v*t*512 represents the length of the minimum 64-byte frame expressed as a distance along the cable. It is important that this length be greater than twice the length of the cable, accounting for propagation delays through at least two repeaters. If it isn't, then it is possible that two transmitting stations will not detect that their transmissions collided, since by the time the signals cross on the cable they are no longer in the vicinity of the transmitters. This leads to a fundamental limit on the speed of classic Ethernet. You can increase signaling speed if you decrease the cable length, but then the scope of your LAN is reduced. Or, you can increase signaling speed if you increase the minimum frame size, but then your utilization and latency suffer. That's a big reason why bridges (switches) and routers are so important to making Ethernet faster. Also, switches increase the aggregate bandwidth, allowing for example cable 1 to transmit to cable 2 while simultaneously cable 3 is transmitting to cable 4; and, of course, the collision domain is significantly reduced.
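Putting rough numbers on that relationship for classic 10Mbps Ethernet: the 64-byte (512-bit) minimum frame must take longer to transmit than twice the end-to-end propagation delay. A back-of-the-envelope Python sketch, assuming a propagation speed of about two-thirds of c in coax and ignoring repeater and interface delays (which is why the standard's actual limits were tighter):

```python
# Collision detection only works if a station is still transmitting when
# the first bit of a colliding frame makes the round trip back to it.

BIT_RATE = 10e6        # bits per second
MIN_FRAME_BITS = 512   # 64-byte minimum frame
VELOCITY = 2e8         # signal propagation in coax, roughly 0.66c, in m/s

slot_time = MIN_FRAME_BITS / BIT_RATE          # time to send the minimum frame
max_round_trip = slot_time                     # upper bound on 2 * one-way delay
max_distance = VELOCITY * max_round_trip / 2   # one-way, no repeater delays

print(f"slot time: {slot_time * 1e6:.1f} us")            # 51.2 us
print(f"max end-to-end distance: {max_distance / 1000:.2f} km")  # 5.12 km
# Raise the bit rate tenfold and this distance shrinks tenfold, which is
# why scaling up CSMA/CD meant shrinking networks, and why switched
# full-duplex links eventually did away with the constraint entirely.
```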

Regarding the spanning tree algorithm, I would also add a couple of clarifying points. First, the algorithm is completely distributed, requiring no central coordination--I remember that Radia had complained that the standards group that took over this work completely screwed up the protocol because they weren't satisfied with how quickly the algorithm converged. Second, it is important to note that it is expected and desirable that the bridges form loops for redundancy. When a bridge fails the algorithm will detect the failure and recalculate the spanning tree using the redundant bridges. The first product to utilize this spanning tree technique was the DEC LanBridge 100--Radia worked for DEC at the time.

Overall, a really good piece of writing. Thanks.

BTW, Ars, fix your damned commenting system! I really hated typing this in all over again after your server puked! Error number "1096111306", indeed.

I thought the need for crossover cables was long gone. Aren't all ports auto-sensing?

As an optional feature of 1000BaseT, auto-sensing was implemented. Problem is, it's optional. Cisco Line Cards (specifically the 6748) don't do auto-sensing ...

It is optional, but folks need to remember that 802.3ab (1000BaseT) is the first popular UTP Ethernet to use all 4 pairs of Cat5e cables - many installs did not care how the unused pairs were punched down, so auto-crossover (Auto-MDIX) was born.

Iljitsch van Beijnum / Iljitsch is a contributing writer at Ars Technica, where he covers network protocols as well as Apple topics. He is currently finishing his Ph.D. work at the telematics department at Universidad Carlos III de Madrid (UC3M) in Spain.