Virtual Reality

So Google likely knows what’s inside all of the buildings it has extracted. And as Google gets closer and closer to capturing every building in the world, it will probably start highlighting / lighting up buildings related to queries and search results.

This will be really cool when Google’s/Waymo’s self-driving cars have AR displays. One can imagine pointing out the window at a building and being able to see what’s inside.

This piece is originally from Dec 19, 2016, but interesting to revisit as we enter the home stretch of 2017 (and what a year it has been):

In 2017, we will start to see that change. After years of false starts, voice interface will finally creep into the mainstream as more people purchase voice-enabled speakers and other gadgets, and as the tech that powers voice starts to improve. By the following year, Gartner predicts that 30 percent of our interactions with technology will happen through conversations with smart machines.

I have no doubt that we’ll all be using voice-driven computing on an ever increasing basis in the coming years. In our home, we have an Amazon Echo, four Echo Dots, and most rooms have Hue Smart Bulbs in the light fixtures (oh, and we have the Amazon Dash Wand in case we want to carry Alexa around with us…). I haven’t physically turned on a light in any of our rooms in months. That’s weird. It happened with the stealth of a technology that slowly but surely creeps into your life and rewires your brain the same way the first iPhone changed how I interact with the people I love. We even renamed all of our Alexa devices as “Computer” so that I can finally pretend I’m living on the Starship Enterprise. Once I have a holodeck, I’m never leaving the house.

And perhaps that’s the real trick to seeing this stealth revolution happen in front of our eyes and via our vocal cords… it’s not just voice-driven computing that is going to be the platform of the near future. In other words, voice won’t be the next big platform. There will be a combination of voice AND augmented reality AND artificial intelligence that will power how we communicate with ourselves, our homes, our environments, and the people we love (and perhaps don’t love). In twenty years, will my young son be typing onto a keyboard in the same way I’m doing to compose this post? In ten years, will my 10-year-old daughter be typing onto a keyboard to do her job or express herself?

I highly doubt both. Those computing processes will be driven by a relationship to a device representing an intelligence. Given that, as a species, we adapted to rely on relational interactions with physical cues and vocal exchanges over the last 70 million years, I can’t imagine that a few decades of “typing” radically altered the way we prefer to communicate and exchange information. It’s the reason I’m not an advocate of teaching kids how to type (and I’m a ~90 wpm touch typist).

Voice combined with AI and AR (or whatever we end up calling it… “mixed reality” perhaps?) is the next big platform because these three will fuse into something the same way the web (as an experience) fused with personal computing to fuel the last big platform revolution.

I’m not sure Amazon will be the ultimate winner in the “next platform” wars that it is waging with Google (Google Assistant), Apple (Siri), Facebook (Messenger), and any number of messaging apps and startups that we haven’t heard of yet. However, our future platforms of choice will be very “human” in the same way we lovingly interact with the slab of metal and glass that we all carry around and do the majority of our computing on these days. It’s hard to imagine a world where computers are shrunk to the size of fibers in our clothing and become transparent characters that we interact with to perform whatever we’ll be performing. But the future does not involve a keyboard, a mouse, and a screen of light-emitting diodes for most people (I hear you, gamers), and we’ll all see reality in even more differing ways than is currently possible as augmented reality quickly becomes mainstream in the same way that true mobile devices did after the iPhone.

Future generations would look back and be amazed that 21st Century life was so people-centric, he said, especially in fields, such as car driving, where human fallibility put more lives at risk than was necessary.

Bryan Richardson, Android software engineer at stable|kernel, wants you to consider this: what if firefighters could wear a helmet that could essentially see through the walls, indicating the location of a person in distress? What if that device could detect the temperature of a wall? In the near future, the amount of information that will be available through a virtual scan of our immediate environment and projected through a practical, wearable device could be immense.

Call Pokemon Go silly / stupid / trendish / absurd etc. To a certain point the game is incredibly inane. However, it does illustrate the ability of memes and mass fads to still occur in large numbers despite the “fracturing” of broadcast media and the loss of hegemonic culture.

The more immediate question to me, though, is what to do with this newfound cultural zeitgeist around AR? Surely, there will be more copycat games that try to mirror what Pokemon Go, Nintendo, and Niantic have created. Some will be “better” than Pokemon Go. Some will be direct rip-offs.

Tech behemoths such as Facebook, Microsoft, Samsung, HTC, and now Google understand the long-term implications of AR and are each working towards internal and public projects to make use of this old-but-new intense hope and buzz around the idea of using technology to augment our human realities. I say realities because we shouldn’t forget that we experience the world based on photons bouncing off of things and going into our eyeballs through a series of organic lenses that flip them upside down onto the theater screen that is our retina. The retina pushes them through the optic nerve to our visual cortex, where our electrochemical neurons attempt to derive or make meaning from the data and process that back down our spinal cord to the rest of our bodies… there’s lots of room for variation and subjectivity given that we’re all a little different biologically and chemically.

We’re going to see a fast-moving evolution of tools for professions such as physicians, firefighters, and engineers, as well as applications in the military and in classrooms, etc., that will give some people pause. That always happens whether the new technology is movable type or writing or books or computers or the web.

Games (and porn unfortunately) tend to push us ahead when it comes to these sorts of tech revolutions. That will certainly be the case in terms of augmented reality. Yes, Pokemon Go is silly and people playing it “should get a life.” But remember, the interactions with that game and each other that they are making now will improve the systems of the future and save / improve lives. Also… don’t get me started on what it means to “have a life” given our electrochemical clump of neurons that we all are operating from regardless of our views on objectivity, Jesus, or etiquette.

Courtbot was built with the city of Atlanta in partnership with the Atlanta Committee for Progress to simplify the process of resolving a traffic citation. After receiving a citation, people are often unsure of what to do next. Should they appear in court? When should they appear? How much will the fine cost? How can they contest the citation? The default is often to show up at the courthouse and wait in line for hours. Courtbot allows the public to find out more information and pay their citations.

Merianna and I were just talking about the implications of artificial intelligence and interactions with personal assistants such as my beloved Amy.

The conversation came about after we decided to “quickly” stop by a Verizon store and upgrade her phone (she went with the iPhone SE, btw… tiny but impressive). We ended up waiting 45 minutes in a relatively sparse store before being helped with a process that took all of 5 minutes. With a 7-month-old baby, that’s not a fun way to spend a lunch hour break.

The AI Assistant Talk

We were in a part of town that we don’t usually visit, so I opened up the Ozlo app on my phone and decided to see what it recommended for lunch. Ozlo is a “friendly AI sidekick” that, for now, recommends meals based on user preferences in a messaging format. It’s in a closed beta, but if you’re up for experimenting, it hasn’t steered me wrong over the last few weeks of travel and in-town meal spots. It suggested a place that neither one of us had ever heard of, and I was quite frankly skeptical. But with the wait and a grumpy baby, we decided to try it out. Ozlo didn’t disappoint. The place was tremendous, and we both loved it and promised to return often. Thanks, Ozlo.

Over lunch, we discussed Ozlo and Amy, and how personal AI assistants were going to rapidly replace the tortured experience of having to do something like visit a cell provider store for a device upgrade (of course, we could have just gone to a Best Buy or ordered straight from Apple as I do for my own devices, but most people visit their cell provider’s storefront). I said that I couldn’t wait to message Amy and tell her to find the best price on the iPhone SE 64 gig Space Grey version, order it, have it delivered next day, and hook it up to my Verizon account. Or message Amy and ask her to take care of my traffic ticket with the bank account she has access to. These are menial tasks that can somewhat be accomplished with “human” powered services like TaskRabbit, Fancy Hands, or the new Scale API. However, I’d like for my assistant to be virtual in nature because I’m an only child and I’m not very good at trusting other people to get things done in the way I want them done (working on that one!). Plus, it “feels” weird for me to hire out something that I “don’t really have time to do” even if they are willing and more than ready to accept my money in order to do it.

Ideally, I can see these personal AI assistants interfacing with the human services like Fancy Hands when something requires an actual phone call or physical world interaction that AI simply can’t (yet) perform such as picking up dry cleaning.

I don’t see this type of workflow or production flow being something just for elites or geeks, either. Slowly but surely, with innovations like Siri or Google Now or just voice-assisted computing, a large swath of the population (in the U.S.) is becoming familiar and engaged with the training wheels of AI-driven personal assistants. It’s not unimaginable to think that very soon, my Amy will be interacting with Merianna’s Amy to help us figure out a good place and time to meet for lunch (Google Calendar is already quasi doing this, though without the personal assistant portion). Once Amy or Alexa or Siri or Cortana or whatever personality Google Home’s device will have is able to tap into services like Amy or Scale, we’re going to see some very interesting innovations in “how we get things done.” If you have a mobile device (which most adults and a growing number of young people do), you will have an AI assistant that helps you get very real things done in ways that you wouldn’t think possible now.
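Under the hood, the core of that assistant-to-assistant lunch negotiation is just an intersection of free/busy calendar windows. Here’s a minimal toy sketch in Python of that idea — the function name and the sample calendars are hypothetical, not any vendor’s actual protocol:

```python
from datetime import time

def free_overlap(slots_a, slots_b):
    """Return (start, end) windows where both calendars are free."""
    overlaps = []
    for a_start, a_end in slots_a:
        for b_start, b_end in slots_b:
            # The shared window is the later start to the earlier end.
            start, end = max(a_start, b_start), min(a_end, b_end)
            if start < end:
                overlaps.append((start, end))
    return overlaps

# Hypothetical free/busy windows exchanged by the two assistants.
my_free = [(time(11, 30), time(12, 30)), (time(14, 0), time(15, 0))]
her_free = [(time(12, 0), time(13, 30))]

print(free_overlap(my_free, her_free))
```

The real version would layer restaurant preferences, travel time, and identity on top, but the negotiation bottoms out in simple interval math like this.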

“Nah, this is just buzzword futurisms. I’ll never do that or have that kind of technology in my life. I don’t want it.” People said the same thing about buying groceries or couches or coffee on their phones in 2005. We said the same thing about having a mobile phone in 1995. We said the same thing about having a computer in our homes in 1985. We said the same thing about ever using a computer to do anything productive in 1975. We said the same thing about using a pocket calculator in 1965.

In the very near future of compatible APIs and interconnected services, I’ll be able to message this to my AI assistant (saving me hours):

“Amy, my client needs a new website. Get that set up for me on the agency’s Media Temple account as a new WordPress install and set up four email accounts with the following names. Also, go ahead and link the site to Google Analytics and Webmaster Tools, and install Yoast to make sure the SEO is ok. I’ll send over some tags and content but pull the pictures you need from their existing account. They like having lots of white space on the site as well.”

That won’t put me out of a job, but it will make what I do even more specialized.

Whole sectors of jobs and service-related positions will disappear while new jobs that we can’t think of yet will be created. If we look at the grand scheme of history, we’re just at the very beginning of the “computing revolution” or “internet revolution,” and the keyboard / mouse / screen paradigm of interacting with the web and computers themselves is certainly going to change (soon, I hope).

“We believe that a computer that can read and understand stories, can, if given enough example stories from a given culture, ‘reverse engineer’ the values tacitly held by the culture that produced them,” they write. “These values can be complete enough that they can align the values of an intelligent entity with humanity. In short, we hypothesise that an intelligent entity can learn what it means to be human by immersing itself in the stories it produces.”