Beyond Siri: The AI revolution coming from the web

May 11, 2016

From HAL in 2001: A Space Odyssey to Samantha in Spike Jonze’s Her, we have been obsessed for decades with the idea that artificial intelligence-powered computers will one day be able to interact with people, follow spoken instructions and make decisions independently, like a human would.

Since Siri hit our screens on the iPhone 4S, Google, Facebook, Amazon, Microsoft and Baidu have entered the playing field, too. But while each new generation brings its share of interesting new features and use cases, these assistants are still far from the AI we see in movies. It’s hard to imagine anyone engaging in a romantic relationship with Siri, or NASA putting Alexa in control of a spacecraft just yet.

Movies have set the bar pretty high, and we are still waiting for such ubiquitous voice-controlled AI assistants to enter the real world. However, the era of an intelligent assistant that can really help us in our day-to-day lives is certainly closer than we may think.

Passive assistants that rely on user input

When talking about intelligent personal assistants, people tend to think about Siri, Cortana or Amazon Echo. More tech-savvy users might also have heard of Viv from the founders of Siri, Facebook’s recent contribution M and other messaging-based AI tools like Operator or Magic. However, while these new tools have been getting a lot of hype lately, when it comes to their use cases, most of them are still stuck in the realm of glorified Q&As.

These tools follow recent breakthroughs in GPU-accelerated deep learning, which have driven exceptional improvements in pattern recognition, with major applications in speech recognition and computer vision. According to Tim Tuttle, founder and CEO of Expect Labs, within the next two years machines should be able to follow spoken instructions even better than humans do.

In simpler terms, these are the building blocks of artificial general intelligence, and speech recognition is just one aspect. Speaking can be a convenient alternative to typing when your hands are busy, but voice is just a medium, and it isn’t always the best input method. How many times have you started asking Siri something and ended up typing the query into Google yourself?

Context is the hard part

The team behind Viv believes it can build a better personal assistant, using advanced deep-learning techniques to let machines teach themselves how to solve problems. While they’re understandably keeping their secret sauce under wraps, the information disclosed so far suggests that some human guidance is still required to build use cases. In the same way a human can learn to solve a problem using clues from someone who already knows the solution, they guide the AI to find its own methods for solving problems.

However, the comparison with humans stops here because, unlike machines, we have the ability to autonomously build on top of our knowledge by contextualizing problems and finding original solutions to them. We naturally “connect the dots” to find answers and make decisions, while current AI implementations often fail to associate problems with something essential and very hard to formalize: the context that surrounds them.

Context is what gives AI the ability to make more intelligent decisions rather than relying solely on well-defined input instructions. As such, it links the past, present and future to solve sophisticated problems. Professor Patrick Brézillon from the University of Paris argues: “In Artificial Intelligence, the lack of explicit representation of context is one of the reasons of the failures of many Knowledge-Based Systems.” There is indeed a lot to grasp.

Teaching computers about context in human behavior is a colossal task; people are not always predictable, and the variety of situations is practically endless. At a personal level, using machine-learning techniques to understand someone’s way of handling social interactions and decision making would involve countless hours of user input. That could perhaps be done by observing you 24 hours a day but, since mind reading doesn’t exist yet, you would also need to express your reasoning out loud so the machine could learn to think like you.

Harnessing the power of the Internet

Machine learning requires a lot of data. For natural language processing, the data is generally collected in a corpus, a large structured set of texts that can be used to train the AI. To give you an idea of just how large these data sets can be, when Watson beat the human champions of Jeopardy!, it had previously ingested the entire Wikipedia database.
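To make the corpus idea concrete, here is a minimal Python sketch, with an invented helper name and toy sample texts standing in for a real corpus like a Wikipedia dump. It tokenizes a handful of documents and tallies word frequencies, the rawest form of the statistics any corpus-based language model builds on:

```python
from collections import Counter
import re

def build_corpus_counts(documents):
    """Tokenize a set of texts and count word frequencies,
    the simplest statistic a corpus-trained model starts from."""
    counts = Counter()
    for doc in documents:
        # Lowercase and split into alphabetic tokens.
        tokens = re.findall(r"[a-z']+", doc.lower())
        counts.update(tokens)
    return counts

# Toy corpus; a real one would hold millions of documents.
corpus = [
    "Watson ingested the entire Wikipedia database.",
    "A corpus is a large structured set of texts.",
]
counts = build_corpus_counts(corpus)
print(counts.most_common(3))
```

Real systems go far beyond raw counts, of course, but every corpus-based technique ultimately rests on aggregated statistics of this kind.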

What is interesting about the IBM Watson story is that the corpus it was fed required no prior structuring, which means Watson was able to use the data without human supervision. Now what if M received similar training, with the further goals of being able to converse and perform elaborate tasks? What would the model be, and where could we find the appropriate data?

The Internet contains millions of hours of talks and videos, plus books, articles and data: everything a neural network would need to build intelligence. Want to teach a machine about love? Feed it Romeo and Juliet and other romance novels. About business? Plug it into The Wall Street Journal’s news feed. DeepMind recently gave us a taste of what could be done, teaching its AI to read with a data set of more than 300,000 articles from CNN and the Daily Mail.

The data is right there, and for now we only seem to be scratching the surface. But another wave of progress in machine learning will soon allow us to make even more sense of the exabytes of information the Web contains, and such advances will mark a huge step in the evolution toward an Artificial Superintelligence.

Aside from scraping the oceans of unstructured data available online, a much closer future that could make our lives easier is already at hand. Every day humans make hundreds of decisions online, and every time we click on a link, those clicks are recorded by advertising and analytics companies across multiple websites. Imagine if that information were instead used by an AI dedicated to understanding your browsing preferences: one that collects the relevant signals from your browsing, along with those of millions of other users, and mines the data for patterns.
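As a sketch of the simplest version of this idea, here is a toy Python example (the function name, URLs and category labels are all invented for illustration) that aggregates one user's clicks into a normalized preference profile, the kind of signal a hypothetical assistant could use to rank suggestions:

```python
from collections import defaultdict

def browsing_profile(clicks):
    """Aggregate (url, category) click records into a
    normalized category-frequency profile for one user."""
    profile = defaultdict(int)
    for url, category in clicks:
        profile[category] += 1
    total = sum(profile.values())
    # Normalize counts into shares that sum to 1.
    return {cat: n / total for cat, n in profile.items()}

# Invented click history for a single user.
clicks = [
    ("example.com/ai-news", "tech"),
    ("example.com/markets", "finance"),
    ("example.com/deep-learning", "tech"),
]
print(browsing_profile(clicks))
```

Combining such profiles across millions of users is where the real pattern mining would happen, but even this per-user summary captures more intent than a raw click log.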

Not only would it be able to offer you a more personalized and contextualized experience of the Web, it would also understand your intent better and anticipate your needs before you even express them! And voilà: an AI personal assistant that could really lighten the load, using technology that already exists.

Researchers predict we’ll have to wait at least another decade before we can truly experience ubiquitous human-like intelligence. Meanwhile, the Internet is reaching a stage where it has gathered all the necessary ingredients for a huge leap in AI, with bots, scrapers, analytics and other APIs already harvesting online data we could put to use.

So let’s separate the images from the movies from the real progress happening before our eyes, and realize that the greatest chance we have of creating an AI that can really make our lives easier is by harnessing something we use every day.