Augmented reality: Like it or not, only Apple's ready for the data-vomit gush

It's all about AR... and iPhone has the X factor

This month's release by Apple of the iPhone X with FaceID begins the first wave of consumer products designed from the ground up for continuous awareness of space, place and face - crowning half a century of research in augmented reality destined to fuse our rising sea of data onto the real world.

Over the last few years, augmented reality (AR) has travelled a hype-curve rollercoaster, from the crashing failure of Google's Glass to the soaring heights of Pokémon Go, accelerating into open competition between APIs from Facebook, Apple, Alphabet/Google and Microsoft (hereafter known as FAAM).

Yet the story of AR goes back much further, all the way to the dawn of modern computing. Ivan Sutherland, the Turing Award-winning genius behind Sketchpad - the first truly interactive computer program - followed that up with another modest innovation - a “head mounted three dimensional display.”

Known as the Sword of Damocles because of the way the mechanical tracking apparatus hung down from the ceiling over the user's head, Sutherland's invention used dual CRTs, focused and reflected through half-silvered mirrors, to overlay a wireframe 3D image onto the real world.

Here's where the histories get things wrong: this was never a demo of virtual reality. Sutherland invented augmented reality - virtual reality meant simply running the system with the room lights off.

AR predates VR, because AR had a real-world use case: Sutherland's paymasters at the Defense Advanced Research Projects Agency (DARPA) needed visualisation tools for jet fighter pilots, something to help them manage the overwhelming flood of data generated by the fastest and most responsive of machines.

Jet fighters have had heads-up displays for decades, a now-indispensable element of flight instrumentation. They begin with Sutherland - who also simultaneously invented real-time 3D computer graphics, to have something to render within the Sword of Damocles. That's a hell of a beginning.

Outside the military, AR stagnated, because computers of the time weren't up to the job. NASA's earliest VR systems eschewed real-world integration in favour of easier-to-generate, fully synthetic worlds. Bringing the real world into the virtual world requires a sophisticated capacity to model the real world, in real time.

Twenty years ago, aircraft giant Boeing spearheaded one of the first non-military deployments of AR, at its massive 777 assembly facility just outside Seattle, where technicians used AR to keep track of the countless separate component assembly operations required to fabricate a completed aircraft. Boeing's AR system took what would have been a 10,000-plus page manual and projected it into a display headset, overlaying the body of the aircraft with the necessary technical information for the assembly process, making that process both faster and less error-prone.

Google's Glass wasn't much of an advance on Boeing's work. Like so much else in this world, Glass leveraged smartphone mass-manufacturing to cram smartphone-like components into a different form factor - all of it running a version of Android. Yet it did feature one of the hallmarks of augmented reality - continuous surveillance.

They weren't Glassholes, they were ahead of their time

One of the key points of difference between AR and VR is that AR needs to be fed a continuously updated view of the world around it. Without that, AR can't know what to augment, or where to augment it. Google's Glass featured a rather Cylon-like forward-facing camera with a bright red “recording” LED that made almost anyone facing it feel as though they'd been scanned by Skynet. Hence the phenomenon of the "glasshole", banned from the local pub and shunned by society. Glass died - not because it wasn't useful, but because of the "Segway Effect": no one else wanted to be near anyone using one.

After Glass gasped its last (recently reborn as an enterprise tool closer to Boeing's use case than Sergey's) the Great Leap Forward in AR began - spearheaded by Microsoft. Adapting some of the tech from Xbox Kinect into a headset, Microsoft's Hololens represented the first viable commercial implementation of an AR platform, built around “Windows Holographic” - now a core Win10 API renamed “Windows Mixed Reality”, spanning both AR and VR devices.

In Hololens, Moore's Law had finally caught up with Ivan Sutherland: the computing power necessary to create and maintain augmented reality could fit into a bulky-but-not-uncomfortable headset. All those cycles were needed not to create the images, but to manage the core technology of modern AR - Simultaneous Localisation And Mapping, or SLAM.


SLAM blends an assemblage of computer vision and inertial sensor technologies to create a 3D map of a space. Hololens uses a "depth camera" - mounted above the eyes - to create a "depth map". That's essentially the same technique Kinect used to turn the human body into a game controller, but pointed outward, rather than toward the user. That 3D map can then be used to situate "holograms" or other forms of 3D augmentation within real space. Look away, then look back, and the augmentation remains, because the map is the territory.
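A depth map is just a grid of per-pixel distances; turning it into the 3D points that anchor a "hologram" is a single back-projection through the camera's optics. A minimal sketch of the idea in Python (standard pinhole camera model; the focal length and principal point here are illustrative values, not Hololens's actual calibration):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth map (metres per pixel) into a 3D point cloud.

    Under the pinhole model, a pixel (u, v) with depth z maps to
    camera-space coordinates:
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)

# Toy 2x2 depth map: every pixel reports a surface 2 metres away
depth = np.full((2, 2), 2.0)
cloud = backproject(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

Run that over every frame, fuse the clouds as the head moves, and the result is the persistent spatial map that lets a hologram stay put when you look away and look back.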

There's more than one way to SLAM. Back in 2014, Google launched "Project Tango", developing its own library of smartphone-compatible SLAM techniques that eschewed an expensive depth camera in favour of compute-intensive (but cheaper) computer vision techniques. As a camera scans a space, the difference between two captured frames, combined with data from the smartphone's inertial sensors, generates the requisite 3D map.
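The kernel of that camera-plus-IMU trick is motion stereo: the inertial sensors estimate how far the phone moved between two frames (the baseline), the images reveal how far a feature shifted in pixels (the disparity), and similar triangles give the feature's distance. A toy sketch of just that final step, with made-up numbers:

```python
def depth_from_motion(fx_pixels, baseline_m, disparity_px):
    """Motion stereo: a feature seen in two frames, with the camera
    translated sideways by `baseline_m` (estimated from the IMU),
    shifts across the image by `disparity_px` pixels. By similar
    triangles:
        depth = focal_length * baseline / disparity
    """
    return fx_pixels * baseline_m / disparity_px

# A camera with a 500-pixel focal length moves 10 cm; a tracked
# feature shifts 25 pixels between frames -> it is 2 metres away.
d = depth_from_motion(500.0, 0.10, 25.0)  # -> 2.0
```

A real Tango-style pipeline does this for thousands of tracked features per second, which is why the approach trades a depth camera's cost for serious compute.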

Tango found its way into only two devices in 2016, and as a result attracted only a handful of Android app developers - like "hologram" purveyors 8i.

In July 2016, everything changed. Niantic's tie-up with Nintendo produced Pokémon Go, the fastest-selling mobile game of all time, and - in just its first month - US$200m in revenues, as users greedily purchased in-game content. With that kind of market validation, augmented reality broke through - quickly surpassing a resurgent virtual reality as the new shiny in tech.

With that, the race was on between the FAAM firms to dominate the next new frontier - the real world, augmented.

Zuckerberg led the charge with a twenty-minute discussion of the future of AR during this year's F8 keynote, touting the smartphone camera as "the new interface", and promising a rich library of AR tools for Facebook's developers, starting with AR Studio.

Like every other major player, Facebook has its own SLAM technology. SLAM is the major barrier to entry for AR, so each of the majors touts its SLAM solution as better than its competitors'. Each has different strengths and weaknesses. Facebook has to be cross-platform, working across a range of smartphone cameras and computing capacities, while device manufacturers like Apple and Microsoft can optimise their experiences.

As for Apple, it has been in the AR game for years, purchasing Kinect developer PrimeSense in 2013, and, two years later, AR developer Metaio - presumably for its SLAM software. Although much had been rumoured about a pair of Apple “spectacles” - the mythical sunglass-like idealised interface for augmented reality - very little hardware had been announced to support AR until just a month ago.

At this year's World Wide Developers Conference (WWDC), Apple unveiled ARKit, an iOS library seamlessly providing SLAM and integration with other key iOS services, such as CoreLocation. Suddenly, everything hard about AR became as easy as flipping a few switches in Xcode and copying a few code examples. Apple fanbois noted Apple's SLAM implementation was incredibly good, and very easy to use. A series of eye-popping demos surfaced on YouTube - including, most memorably, a “portal” into a jungle-like dimension from the middle of a city street.

Apple followed that up with the iPhone X and its TrueDepth camera - essentially a Kinect in about one-thousandth the volume, capable of accurately mapping a human face, both for identification purposes, and to produce a stream of real-time motion capture data that can be mapped onto a poo emoji.

The TrueDepth camera is the first real bit of AR technology that hundreds of millions will own - and it's clearly the direction Apple will be taking AR on iOS, with expectations it will show up across the product line within the next few years.

Alphabet ran a search-and-replace over Project Tango (perhaps not quite literally, but fairly obviously), announcing its own Android-based ARCore. It currently works only on Google's Pixels and Samsung's Galaxy S8, while Apple's ARKit works on all iOS devices back to the iPhone 6S. There are a few ARCore demos out, but nothing like the avalanche of ARKit apps - hundreds released in the six weeks following the launch of iOS 11 - giving Apple an early advantage in the AR wars as Google tries to spread ARCore to other Android devices.

Microsoft has delayed a next-generation Hololens until early 2019, promising a "3rd generation device" closer to the ideal of nearly weightless sunglasses. Meanwhile, Windows Mixed Reality remains a developer's toolkit primarily geared toward virtual reality apps. At well over $3,000 apiece, Hololens has sold only a few tens of thousands of units - not nearly enough to kickstart an ecosystem of AR developers.

Facebook seems to be stuck somewhere between the obvious-to-all-but-itself collapse of its VR efforts, as Oculus struggles to reach even half a million units in sales, and the launch of its AR Studio applications. No one doubts Facebook can push AR capabilities to its billions of mobile users - but when?

If, as Zuck and Tim Cook and Satya Nadella all suggest, AR is the next great computing platform, there will be another generation of winners and losers. Microsoft lost the smartphone wars, came out strong with Hololens, but seems to be lost once again. Google appears hamstrung by platform fragmentation. Only Apple has shown both the technical and marketing chops to make augmented reality an everyday experience in 2017. Right now it's all about iOS 11, ARKit and the half billion devices that support both. That’s a hell of a dent in the universe. ®