Cutting the cord: how the world’s engineers built Wi-Fi

Wireless networking has exploded over the last 15 years, but how do our …

In the 1980s, even before connectivity to the Internet became commonplace, people realized that connecting a group of computers together in a local area network (LAN) made those computers much more useful. Any user could then print to shared printers, store files on file servers, send electronic mail, and more. A decade later, the Internet revolution swept the world and LANs became the on-ramp to the information superhighway. The LAN technology of choice was almost universally Ethernet, which is terrific apart from one big downside: those pesky wires.

In the late 1990s, the Institute of Electrical and Electronics Engineers (IEEE) solved that problem with their 802.11 standard, which specified a protocol for creating wireless LANs. If ever the expression "easier said than done" applied, it was here. Huge challenges had to be overcome over the past 15 years to get us to the point where reasonably reliable, fast, and secure wireless LAN equipment can today be deployed by anyone, and where every laptop comes with built-in Wi-Fi. But overcome they were—and here's how.

Early Aloha

The journey started back in the early 1970s. The University of Hawaii had facilities scattered around different islands, but the computers were located at the main campus in Honolulu. Back then, computers weren't all that portable, but it was still possible to connect to those computers from remote locations by way of a terminal and a telephone connection at the blazing speed of 300 to 1200 bits per second. But the telephone connection was both slow and unreliable.

A small group of networking pioneers led by Norman Abramson felt that they could design a better system to connect their remote terminals to the university's central computing facilities. The basic idea, later developed into "AlohaNET," was to use radio communications to transmit the data from the terminals on the remote islands to the central computers and back again. In those days, the well-established approach to sharing radio resources among several stations was to divide the channel either into time slots or into frequency bands, then assign a slot or band to each of the stations. (These two approaches are called time division multiple access [TDMA] and frequency division multiple access [FDMA], respectively.)
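
To make the two classic channel-sharing schemes concrete, here is a minimal sketch (my own illustration with hypothetical numbers and station names, not from the article) of how a single radio channel would be carved up under FDMA and TDMA: every station gets a fixed slice whether or not it has anything to send.

```python
# Minimal sketch of classic channel sharing (hypothetical numbers).
# FDMA: the band is cut into N narrow sub-bands, one per station.
# TDMA: time is cut into a repeating frame of N slots, one per station.

CHANNEL_BPS = 9600                                 # assumed total channel rate
STATIONS = ["Maui", "Kauai", "Hilo", "Oahu-2"]     # hypothetical remote sites
N = len(STATIONS)

# FDMA: each station permanently owns 1/N of the bandwidth.
fdma = {name: CHANNEL_BPS / N for name in STATIONS}

# TDMA: each station permanently owns slot i of every N-slot frame.
tdma = {name: f"slot {i} of {N}" for i, name in enumerate(STATIONS)}

print("FDMA assignment (bit/s each):", fdma)
print("TDMA assignment:", tdma)
# Either way a station's share is fixed at 1/N of the channel even while it
# sits idle; this is the inefficiency the AlohaNET designers set out to avoid.
```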

Obviously, dividing the initial channel into smaller, fixed-size slots or channels results in several lower-speed channels, so the AlohaNET creators came up with a different system to share the radio bandwidth. AlohaNET was designed with only two high-speed UHF channels: one downlink (from Honolulu) and one uplink (to Honolulu). The uplink channel was to be shared by all the remote locations to transmit to Honolulu. Rather than slicing and dicing it into smaller slots or channels, the full channel capacity was made available to everyone. But this created the possibility that two remote stations might transmit at the same time, making both transmissions impossible to decode in Honolulu. Transmissions might fail, just like any surfer might fall off her board while riding a wave. But hey, nothing prevents her from trying again. This was the fundamental, ground-breaking advance of AlohaNET, reused in all members of the family of protocols collectively known as "random access protocols."
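
The price of letting everyone transmit at will is collisions. The sketch below (my own illustration, not from the article) simulates a pure-ALOHA uplink: stations fire off fixed-length frames at random moments, and a frame survives only if no other frame overlaps it. Classic analysis puts the peak useful throughput of pure ALOHA at 1/(2e), roughly 18 percent of the channel.

```python
import random

# Toy simulation of a pure-ALOHA shared uplink (illustrative only).
# Frames are 1 time unit long; a frame succeeds only if no other frame
# starts within 1 time unit before or after it (the "vulnerable period").

def aloha_throughput(offered_load, sim_time=100_000, frame_len=1.0):
    # Generate Poisson-distributed transmission start times at the given load.
    starts = []
    t = 0.0
    while t < sim_time:
        t += random.expovariate(offered_load / frame_len)
        starts.append(t)
    successes = 0
    for i, s in enumerate(starts):
        prev_ok = i == 0 or s - starts[i - 1] >= frame_len
        next_ok = i == len(starts) - 1 or starts[i + 1] - s >= frame_len
        if prev_ok and next_ok:
            successes += 1
    return successes * frame_len / sim_time  # fraction of time carrying good frames

for g in (0.1, 0.5, 1.0, 2.0):
    print(f"offered load {g:.1f} -> throughput {aloha_throughput(g):.2f}")
# Throughput peaks near 0.18 around an offered load of 0.5, matching 1/(2e);
# push the load higher and collisions eat almost everything.
```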

The random access approach implemented in AlohaNET represented a paradigm shift from a voice network approach to a data network approach. The traditional channel sharing techniques (FDMA and TDMA) implied the reservation of a low-speed channel for every user. That low speed was enough for voice, and the fact that the channel was reserved was certainly convenient; it prevented the voice call from being abruptly interrupted.

But terminal traffic to the central computers presented very different requirements. For one thing, terminal traffic is bursty. The user issues a command, waits for the computer to process the command, and looks at the data received while pondering a further command. This pattern includes both long silent periods and peak bursts of data.

The burstiness of computer traffic called for a more efficient use of communication resources than what could be provided by either TDMA or FDMA. If each station was assigned a reserved low-speed channel, the transmission of a burst would take a long time. Furthermore, channel resources would be wasted during the long silent periods. The solution, implemented by AlohaNET's random access protocol, is a concept central to data networks: statistical multiplexing. A single high-speed channel is shared among all the users, but each user uses it only some of the time. While Alice is carefully examining the output of her program over a cup of tea, Bob could be uploading his data to the central computer for later processing. Later, the roles might be reversed as Alice uploads her new program while Bob is out surfing.
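
A rough back-of-the-envelope sketch (hypothetical numbers, not from the article) shows why statistical multiplexing pays off for bursty terminals: sending a burst over a dedicated 1/N-rate channel takes N times as long as blasting it over the full shared channel, and the dedicated channel then sits idle during the long think-time that follows.

```python
# Rough numbers illustrating statistical multiplexing (hypothetical values).

CHANNEL_BPS = 9600      # assumed shared channel rate
USERS = 8               # assumed number of terminals
BURST_BITS = 4800       # one screenful of output, say
THINK_TIME_S = 30.0     # user stares at the output before the next command

# Dedicated low-speed channel (TDMA/FDMA style): 1/N of the rate, always reserved.
dedicated_rate = CHANNEL_BPS / USERS
t_dedicated = BURST_BITS / dedicated_rate

# Statistically multiplexed: the burst gets the whole channel while it lasts.
t_shared = BURST_BITS / CHANNEL_BPS

print(f"dedicated 1/{USERS} channel: burst takes {t_dedicated:.1f} s, "
      f"then the channel idles for {THINK_TIME_S:.0f} s")
print(f"shared full channel:        burst takes {t_shared:.1f} s")

# Utilization of the dedicated channel across one command/think cycle:
util = t_dedicated / (t_dedicated + THINK_TIME_S)
print(f"the dedicated channel is busy only {util:.0%} of the time")
```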

To make this multiplexing work, the team needed a mechanism that would allow the remote stations to learn about the failure of their initial transmission attempt (so that they could try again). This was achieved in an indirect way. Honolulu would immediately transmit through the downlink channel whatever it correctly received from the uplink channel. So if the remote station saw its own transmission echoed back by Honolulu, it knew that everything went well and Honolulu had received the transmission successfully. Otherwise, there must have been a problem, making it a good idea to retransmit the data.
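
A remote station's side of this echo-based scheme might look something like the Python sketch below. It is my own reconstruction of the idea; the function names and timing constants are hypothetical placeholders, not AlohaNET's actual interfaces.

```python
import random
import time

# Sketch of a remote station's send loop in an ALOHA-style system.
# send_uplink(), heard_own_echo() and the timing values are hypothetical
# placeholders supplied by the caller, not the real AlohaNET implementation.

ECHO_TIMEOUT_S = 0.5      # assumed: how long to listen for our echo
MAX_BACKOFF_S = 2.0       # assumed: upper bound on the random retry delay

def send_with_echo_ack(frame, send_uplink, heard_own_echo):
    """Transmit a frame, then listen on the downlink for Honolulu echoing
    it back. No echo means a collision (or loss), so wait a random time
    and try again; the randomness keeps two colliding stations from
    re-colliding forever."""
    while True:
        send_uplink(frame)
        deadline = time.monotonic() + ECHO_TIMEOUT_S
        while time.monotonic() < deadline:
            if heard_own_echo(frame):          # our bits came back: success
                return
        time.sleep(random.uniform(0, MAX_BACKOFF_S))   # back off, then retransmit
```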

Standards wars

"The wonderful thing about standards is that there are so many of them to choose from." Grace Hopper, as per the UNIX-HATERS Handbook (PDF) page 9 or 49.

By the end of the last century, two standards were competing head to head in the wireless local area network arena. The American contender, developed by the IEEE, relied on simpler, more straightforward approaches. The European alternative, proposed by the European Telecommunications Standards Institute (ETSI), was more sophisticated, featuring higher data rates and traffic prioritization for service differentiation. Vendors favored the easier-to-implement IEEE alternative, ignoring all optional features.

Obviously, a simpler approach had the advantage of a shorter time-to-market, which was critical for obtaining substantial market share and which paved the way for the ultimate success of the IEEE specification over the one standardized by ETSI. The IEEE 802.11 standard in question belongs to the 802 standard family that also includes IEEE 802.3 (Ethernet). As time passed, the IEEE 802.11 standard was refined to incorporate some features that were present in the early ETSI proposal, such as higher data rates and service differentiation.

I also would have liked to hear about some of the very early non-standard protocols back in the late '90s. I remember cards one could buy that would do 1Mbit or 800kbit or such. All in all, a good article, though.

How disappointing that there is not a single mention of the Australian CSIRO scientists who were integral in its development and fought the biggest technology companies on Earth for the rights to it and won. Those poor guys fought hard to be recognised for their work in court and yet the world still forgets about them. It makes me so sad.

Guys, despite your endless championing of the little guy in the face of goliaths, the article never went into any great depth on any of the thousands of individual technological innovations necessary to make wireless work at all - it was a broad overview, not a 200-page rigorous technical document. (Aside from a mention of Lamarr's patent.) You want that, go read the specs; they list pretty much every patent included in the pool, and it might surprise you to find out that it's more than just one Australian patent, important as it might be.

Yes, a mention of CSIRO's work is not necessary considering the context of the article. The title suggests a different context, i.e. the problems and specific engineers who solved them, so I can see why people want CSIRO mentioned.

They solved the problem of being able to "talk" in a "noisy" environment (i.e. everywhere) and did it with a zero error rate. They did it with an algorithm and a "chip" which is still part of all WiFi technology today. It was included in the standard without any acknowledgement and was the key to getting devices to communicate effortlessly. How this does not figure in wireless communications is beyond me. The title says "World"; it would be nice if our part of the world got acknowledgement as well.

I remember Jobs demoing the iBook (the first, 'toilet seat' variant, if memory serves me right), and in a dramatic way demonstrating that it was connected wirelessly. I have always wondered if that was the first commercially available laptop with that kind of wireless built in, or if it was just the first Apple laptop with the tech. This must have been fairly soon after the first spec was released.

Like others I'm rather disappointed with the lack of recognition given the CSIRO; it's not like they did anything worthwhile or integral... /sarcasm.

*sigh* Don't we hear enough about patent and copyright lawsuits around here anyway? Yes, CSIRO played a big part, yes, they went to court to be recognized. I dunno, it's old news, and not relevant to the article. It would be like having an article about Windows 8 and starting the discussion of the Apple v Microsoft lawsuits and Xerox v Apple lawsuit back in the 80s over GUI features. Ugh!

The article is about the technology behind Wireless Ethernet, not the people behind the tech (with a very few exceptions). It makes me wonder, sometimes, how great sites even get great writers when this is the kind of reception they receive for their hard work. If you want explicit detail of whose dog died during the creation of the 802.11a standard, and what leathery geek fell off a ladder the first time they were tearing down UTP cables that were no longer needed, write your own book about it or find some dusty, boring, 700-page tome that already has every nitty-gritty detail.

Would have been interesting to trace the parallels with digital wireless. Qualcomm's origin, iirc, was the use of frequency hopping in military radios in the 1980s; it then switched to CDMA, which became one of the 2G technologies and is essential to all forms of 3G. This exactly mirrored the WiFi world, which started out with frequency hopping (I had my home set up with 1Mbps wireless by maybe 1999, including PCMCIA cards in my laptops), which was still an option in early 802.11 but got eclipsed by direct-sequence spread spectrum.

Now, in 4G phones and 802.11n, both systems are on OFDM, which iirc was invented in Canada for HDTV purposes and eventually made its way to WiFi and then to mobiles (the main problem they had to solve for using OFDM in mobiles was mobility; OFDM was already well proven for things that don't move). And OFDM itself has roots going back to things like the Trailblazer modems around 1991. As you can see from the latest round of chips where one modem does *everything* (2G, 3G, CDMA, TD-CDMA, 4G, Bluetooth, near field, ...), the technology has become a tangled web of great ideas and clever techniques.

Wireless comms is one of the finest applications of clever math. Everything from cryptography to simple monotonous counting is used and woven into the daily lives of almost everyone on the planet, about 30 years on from its first military backpacks.

Btw, Bluetooth uses frequency/channel hopping. And that is why only recently has there been some serious work from independent parties into how secure the various implementations actually are, largely thanks to the creation of the Ubertooth.

The article is about the technology behind Wireless Ethernet, not the people behind the tech (with a very few exceptions).

Good call.

Great article too. Would've loved to hear more about how "MAC bottlenecking" was overcome; this is pretty new to me.

I think this was already mentioned later in the text. They basically bundled up multiple higher-level units to make use of the extra capacity. This means you only need one header and footer for the lot of them. Think of it like putting a bunch of smaller shipments into a single large box because they are all going to the same address. This in turn reduces processing overhead because only one address label needs to be processed.
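
For what it's worth, here is a toy sketch of that bundling idea (my own illustration with made-up sizes, not the actual 802.11n aggregation format): wrapping several small payloads behind one header amortizes the per-frame overhead across all of them.

```python
# Toy illustration of frame aggregation (made-up sizes, not real 802.11n framing).

HEADER_BYTES = 40                  # assumed per-frame overhead (headers, preamble, etc.)
payloads = [200, 150, 300, 250]    # four small higher-level units, in bytes

# Send each payload in its own frame: one header per payload.
unaggregated = sum(HEADER_BYTES + p for p in payloads)

# Bundle them all behind a single header, like one big box going to one address.
aggregated = HEADER_BYTES + sum(payloads)

print(f"separate frames:  {unaggregated} bytes on the air")
print(f"aggregated frame: {aggregated} bytes on the air")
print(f"overhead saved:   {unaggregated - aggregated} bytes")
```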

No mention of the actress Hedy Lamarr? She and her neighbor, a composer friend, created a system using the 88 keys of a piano during the war to send encrypted messages. This "frequency-hopping" technique was later used in the creation of WiFi as we know it; she was even credited with its discovery.

Hedy Lamarr and "frequency-hopping" were mentioned... suggest a re-read, or reading comp/retention training.

Edit: Page 2 under the misleading header, "Frequency Hopping"

Thanks for the uncalled-for snarky reply. I was actually writing code on another rig and was just glancing at the article.

Sorry you took offense, but perhaps before you comment on a missing piece of information, I don't know, maybe check to see if the information is in fact "missing".

Really good article! I am currently at a tech school for the Air Force as I start my career in the Radio Frequency Transmission field. It was really cool to see all of the terms we've learned about (e.g. FDMA, CDMA...) in actual use beyond the classroom. I spaced out a couple of times because I'm in the chow hall, but I hope to re-read this to get a better understanding of it all.

Iljitsch van Beijnum / Iljitsch is a contributing writer at Ars Technica, where he covers network protocols as well as Apple topics. He is currently finishing his Ph.D. work in the telematics department at Universidad Carlos III de Madrid (UC3M) in Spain.