Month: September 2017

I’ve been thinking about the ‘notch’ in the iPhone X. In case you’ve no idea what I’m talking about, the X has an ‘all-screen’ design; the home button is gone, and the front of the device no longer has bezels above and below the screen, except for a curving indent at the top which holds the image sensors necessary for the camera and the new facial authentication feature.

It seems somehow like a design compromise; the sensors are of course necessary, but it feels like there could have been a full-width narrow bezel at the top of the device rather than the slightly odd notch that requires special design consideration.

But my thought was: if they had chosen a full-width bezel, what would make the iPhone distinctive? Put one face-up on the table next to, say, a new LG or Samsung Galaxy phone: how could you tell, at a glance, which was the iPhone?

The iPhone’s single-button design is so distinctive that it’s become the de facto icon for smartphones. Without it, the phone looks like every other modern smartphone (until you pick it up or unlock it). The notch gives the X a unique look that continues to make it unmistakably an Apple product, even with the full-device screen. It makes it distinctive enough to be iconic, and to protect legally—given Apple’s litigious history, not a small consideration.

Of course the notch requires more work from app designers and developers to make their products look good, but Apple is one of the few (perhaps the only) companies with enough clout, and a devoted enough following, to demand that extra work—you can’t imagine LG being able to convince Android app makers to put in an extra shift in that way. So perhaps it’s still something of a design kludge, but it’s a kludge with purpose.

Twitter is awash with impressive demos of augmented reality using Apple’s ARKit or Google’s ARCore. I think it’s cool that there’s a palpable sense of excitement around AR—I’m pretty excited about it myself—but I think that there’s perhaps a little too much early hype, and that what the demos don’t show is perhaps more suggestive of the genuinely exciting future of AR.

Below is an example of the demos I’m talking about — a mockup of an AR menu that shows each of the individual dishes as a rendered 3D model, digitally placed into the environment (and I want to make clear I’m genuinely not picking on this, just using it as an illustration):

This raises a few questions, not least around delivery. As a customer of this restaurant, how do I access these models? Do I have to download an app for the restaurant? Is it a WebAR experience that I see by following a URL?

There’s so much still to be defined about future AR platforms. Ben Evans’ post, The First Decade of Augmented Reality, grapples with a lot of the issues of how AR content will be delivered and accessed:

Do I stand outside a restaurant and say ‘Hey Foursquare, is this any good?’ or does the device’s OS do that automatically? How is this brokered — by the OS, the services that you’ve added or by a single ‘Google Brain’ in the cloud?

The demo also raises important questions about utility; for example, why is seeing a 3D model of your food on a table better than seeing a 3D model in the web page you visit, or the app you download? Or, why is it better even than seeing a regular photo, or just reading the description on the menu? Do you get more information from seeing a model in AR than from any other medium?

Again, I’m not setting out to criticise the demos; I think experimentation is critical to the development of a new technology—even if, as Matt Miesnieks points out in a separate essay, a lot of this experimentation has already happened before…

I’m seeing lots of ARKit demos that I saw 4 years ago built on Vuforia and 4 years before that on Layar. Developers are re-learning the same lessons, but at much greater scale.

But placing 3D objects into physical scenes is just one narrow facet of the greater potential of AR. When we can extract spatial data and information from an image, and also manipulate that image digitally, augmented reality becomes something much more interesting.

In Matthew Panzarino’s review of the new iPhones he talks about the Portrait Lighting feature—which uses machine learning smarts to create studio-style photography—as augmented reality. And it is.

AR isn’t just putting a virtual bird on it or dropping an Ikea couch into your living room. It’s altering the fabric of reality to enhance, remove or augment it.

The AR demos we’re seeing now are fun and sometimes impressive, but my intuition is that they’re not really representative of what AR will eventually be, and there are going to be a few interesting years until we start to see that revealed.