Chapter 1: The Middle of Moore's Law

Part 3: Ubiquitous Computing

Like many other prescient observations and innovations (Hiltzik, 2000), the researchers at Xerox PARC identified in the 1980s that technology was part of accomplishing social action (Suchman, 1987) and that personal computers were "too complex and hard to use; too demanding of attention; too isolating from other people and activities; and too dominating" (Weiser et al., 1999). They coined the term "ubiquitous computing" to describe their program to develop a range of specialized networked information processing devices to address these issues.

[Footnote: It's interesting to hypothesize how apparent the implications of following Moore's trend on the size, shape and use of computers were, and who first thought of multiple computers distributed in the environment. Accompanying Moore's original 1965 Electronics magazine article is a cartoon by Grant Compton that shows a salesman hawking a handheld computer alongside stands for "notions" and "cosmetics," with well-dressed men and women crowding around him.
The cartoon's joke is that if Moore's plan is followed, eventually computers will be as small, as common, and sold in the same way as universally consumed personal items. It exaggerates the implications of Moore's article for humor--but perhaps it was funny because it pointed to a hope only implicitly acknowledged.]

Xerox PARC's then Chief Technology Officer, Mark Weiser, described these ideas in his 1991 Scientific American article, "The Computer for the 21st Century." In that article, he contrasts the potential of ubicomp technology with portable computers and virtual reality, then the state of the art in popular computer thought:

The idea of integrating computers seamlessly into the world at large runs counter to a number of present-day trends. "Ubiquitous computing" in this context does not just mean computers that can be carried to the beach, jungle or airport. Even the most powerful notebook computer, with access to a worldwide information network, still focuses attention on a single box.

[…]

Perhaps most diametrically opposed to our vision is the notion of "virtual reality," which attempts to make a world inside the computer. […] Although it may have its purpose in allowing people to explore realms otherwise inaccessible […] virtual reality is only a map, not a territory. It excludes desks, offices, other people not wearing goggles and body suits, weather, grass, trees, walks, chance encounters and in general the infinite richness of the universe. Virtual reality focuses an enormous apparatus on simulating the world rather than on invisibly enhancing the world that already exists.

[…]

Most of the computers that participate in embodied virtuality will be invisible in fact as well as in metaphor. Already computers in light switches, thermostats, stereos and ovens help to activate the world. These machines and more will be interconnected in a ubiquitous network.

Whether or not he used the semiconductor industry's price trends in his calculations, his title accurately anticipated the market. The year 1991, when Weiser wrote his article, was still the pre-Web era of the i486. The vision he described, of many small, powerful computers of different sizes working simultaneously for one person (or a small group), was simply unaffordable. Processor economics would not make it commercially viable until well into the first decade of the 21st century (and, sadly, some years after Weiser's premature death in 1999).
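The affordability argument can be made concrete with a back-of-the-envelope projection. The sketch below is illustrative only: the 24-month doubling period and the 1991 baseline are assumptions chosen to match the chapter's timeline, not measured industry prices.

```python
# Illustrative sketch: relative cost of a fixed amount of computing
# power under a Moore's-law-style halving of price-performance.
# Assumptions (not industry data): cost halves every 24 months,
# normalized to 1.0 in 1991, the year of Weiser's article.

def relative_cost(year, base_year=1991, doubling_months=24):
    """Cost of a fixed compute budget, normalized to 1.0 in base_year."""
    months = (year - base_year) * 12
    return 0.5 ** (months / doubling_months)

for year in (1991, 1999, 2005):
    print(year, relative_cost(year))
# Under these assumptions, the same computing power that cost 1.0
# in 1991 costs 0.0625 in 1999 and about 0.0078 (under 1%) by 2005.
```

Even with generous assumptions, the arithmetic shows why a vision requiring many processors per person was out of reach in 1991 but plausible roughly seven doublings later.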

I estimate that the era he envisioned began in 2005. Technologies typically emerge piecemeal at different times, so 2005 is an arbitrary date [Footnote: Ambient Devices' Ambient Orb, for example, came out in 2002.]. But in 2005, Apple put out the first iPod Shuffle, Adidas launched the adidas_1 shoe (Figure 1-1) and iRobot launched the Roomba Discovery robotic vacuum cleaner. None of those products looked like a traditional computer, not least because none had a screen. Moreover, the Shuffle and Discovery were second-generation products, which implies that the first generation's success justified additional investment, and the adidas_1 was deeply embedded in a traditionally non-technological activity (running).

Also, by 2005, a range of industry factors made possible the efficient development of products that roughly fit Weiser's vision of ubiquitous computing. No longer did the elements—the software, the hardware, and the networks—have to be integrated from scratch, often painfully, as they had been throughout the 1990s. Starting around 2000, several factors pointed to an emergence of ubicomp as a commercial agenda:

CPU prices had fallen to the point that powerful information processing had become inexpensive.

The Internet had become familiar, with clear social and commercial benefits outside of the scientific and engineering community.

A number of standard communication and data exchange protocols had been developed and refined through widespread deployment.

Digital telephony was firmly established, and many people were carrying lightweight, network-connected computers, in the form of mobile phones.

Wireless communication had become common, standardized and successful, with millions of access points deployed throughout the world.

Designers spent the first dotcom boom developing a wide range of interactive products and were experienced with interaction design for networked services.

Thus, the information processing technology was there, the networks were there and, most importantly, technological familiarity among designers, developers and businesspeople was there. By 2005, the fruits of their efforts were in stores and—after nearly two decades of anticipation—the era of ubiquitous computing had begun.