Gadgets and Computers

This is a common theme across many classes of device: you start with a product that has a few electronic functions added, then those functions are delivered with chips, then perhaps it gains an interface, then a screen, then more and more functions (and probably multi-function buttons). And then, somehow, you've built a weird little custom computer without actually meaning to. All the little silos of features and functions become unmanageable, both at the interface level and at a fundamental engineering level, and the whole thing gets replaced by a real computer with a real software platform. That new computer is almost certainly made by a different company.

You could see this problem very clearly at Motorola, which developed as many as two dozen 'operating systems': for phones, pagers, satellite phones, car control, industrial devices, chip evaluation boards and so on, and picked one for each device out of a metaphorical parts bin, just as you'd choose a sensor or a battery or any other component. And boy, did they know how to write operating systems: they had dozens! With, probably, 'millions of lines of code'. This was exactly the right approach in 1995, but by 2005 the whole thing had collapsed under its own weight, because they needed software as a platform rather than as a one-off component, and instead they had a mess.

The iPhone was the first mainstream cell phone that was also a proper computer, with a full-fledged operating system and a (mostly) open developer platform. We are likely seeing the same pattern play out across the next generation of computers: not only cars, but drones, IoT devices, wearables, and so on. In the beginning, hardware-focused companies make gadgets with ever-growing laundry lists of features. Then a company with strong software expertise (often a new market entrant) comes along and replaces these feature-packed gadgets with full-fledged computers. These computers have proper (usually Unix-like) operating systems, open developer platforms, and streamlined user interfaces (increasingly, powered by AI).

This process takes time to play out. Apple waited more than a decade from the initial popularity of cell phones to the release of the first iPhone. And sometimes you don't know the significance of a new computing device until many years later: it wasn't obvious until around 2012 that iOS and Android smartphones would become the dominant form of computing (recall Facebook's "pivot to mobile" that year). Some people (including me) believe we've already entered the "computer phase" of consumer IoT with voice assistants like Alexa, but it will probably take years before we know whether these devices have enduring mainstream appeal.