The author is a Forbes contributor. The opinions expressed are those of the writer.


Guest post written by Ilya Gelfenbeyn

Ilya Gelfenbeyn is the co-founder and CEO of Speaktoit, which develops natural language virtual assistants for Android, iOS and Windows Phone.

When Siri launched on the iPhone 4S in October 2011, tech bloggers raced one another to find the Easter eggs – those hidden gems of personality and jokes programmed into Siri’s robotic (yet strangely comforting) female voice. Apple even opened its developers conference this year with a faux standup routine from Siri herself, playing to a crowd eager to assign a personality to the personal assistant in their pocket.

Sure, Siri – and apps like it on Android and Windows Phone – has utility and can make many processes on our smartphones more efficient. But I believe we’re still very much in the “novelty” phase of the virtual assistant timeline. These phone apps will certainly continue to improve in function and voice recognition prowess, but we’re going to see a second era of the virtual assistant, one that spreads out of our pockets and into more facets of our daily lives.

Automobiles

Your car is the logical next step for the virtual assistant, and we’re already starting to see progress toward that integration. The dangers (and illegality) of using a smartphone while behind the wheel make it a perfect pairing. Bluetooth has been great for drivers taking hands-free calls in cars, but putting a virtual assistant in the passenger seat (or, more accurately, in the dashboard) truly eliminates the temptation to send a quick text in traffic, or to answer an email at a stoplight.

Navigation help will also be a role of your automotive virtual assistant, but in a more proactive way than GPS can offer. For example, your virtual assistant could ask, as soon as you get into the car, if you’re heading to that meeting downtown that starts in half an hour – after all, your assistant knows your schedule. Confirm that, and the assistant can take care of directions, monitor traffic, and help you find a nearby parking garage. Another example: your virtual assistant, connected to the inner workings of your car, will be able to tell you if something is amiss, from engine trouble to a reminder that you’re due for an oil change. It can then decide how and when to notify you. Take gasoline, for instance: you’re running low, and your virtual assistant proactively advises you that there’s a gas station three miles away.

Homes

So-called “smart homes” have so far centered mostly on the idea of automating processes like lighting, security and HVAC systems. As virtual assistant technology becomes embedded in our home appliances and gadgets, we’ll still see some of this automation, but it will be more personalized, better integrated, and more widespread around our homes.

Imagine putting popcorn into your microwave, telling your microwave assistant what you just put in, and having it take care of the rest. Maybe you’re sitting down to watch the nightly news and the assistant in your TV, knowing you’re a fan of Bruce Willis, asks if you want to record Die Hard the next day. Or, knowing from your calendar that you have a trip to Peru next month, it recommends a Travel Channel show on Machu Picchu. Remember, too, that our virtual assistants are all connected. Our car is now connected to our home. For example, our car could signal to our house that we’re ten miles away and headed home – it’s time to flip on the A/C or turn on the lights.

New (and near future) technology

Some of the best implementations of natural language virtual assistants could come with devices that haven’t been fully developed yet, like Google Glass. This research and development project consists of a hands-free, head-mounted display that uses augmented reality to give you a whole new screen (i.e. the physical world in front of you) on which to display information. Google Glass would rely on natural language voice recognition to operate, making it a perfect candidate for virtual assistant integration. Further down the road, we might see the virtual technology take physical form: a robotic assistant capable of performing actionable tasks, not unlike Rosie from The Jetsons. These implementations may still be a few years off, but speech recognition in virtual assistants (which includes understanding context, not just words) is paving the way for these seemingly science fiction technologies to be realized.

The long-term success of virtual assistants will depend on two factors: natural language speech recognition and the range and utility of the assistant’s functions. The former is critical because we shouldn’t have to learn robotic phrases to speak to our assistant – that cripples efficiency. The latter is important because the technology has to matter; it has to enrich (or simplify) our lives. I believe virtual assistants will succeed on both counts and that we’ll soon be interacting with them several times a day, through several different objects or devices.