
What Makes a Mind? Kurzweil and Google May Be Surprised

One AI researcher suggests that an ambitious plan to build a more intelligent machine may be flawed.

January 21, 2013

After writing about Ray Kurzweil’s ambitious plan to create a super-intelligent personal assistant in his new job at Google (see “Ray Kurzweil Plans to Create a Mind at Google—and Have It Serve You”), I sent a note to Boris Katz, a researcher in MIT’s Computer Science and Artificial Intelligence Laboratory who has spent decades trying to give machines the ability to parse the information conveyed through language, to ask him what he makes of the endeavor.

A cross section showing the somatosensory cortex of a mouse. Neurons, at the bottom, and dendrites, reaching up, have been colored by green fluorescent protein from jellyfish (CC BY-SA 2.0).

Here’s what Katz has to say about Kurzweil’s new project:

I certainly agree with Ray that understanding intelligence is a very important project, but I don’t believe that at this point we know enough about how the brain works to be able to build the kind of understanding he says he is interested in into a product.

I previously interviewed Katz for an article about Apple’s Siri (see “Social Intelligence”). He explained that constructing meaning from language goes well beyond learning vocabulary and grammar, often relying on a lifetime of experience with the world. This is why Siri is only capable of responding to a fairly narrow set of questions or commands, even if Apple’s designers have done a clever job of making Siri seem as if its understanding goes much deeper.

Kurzweil believes he can build something approaching human intelligence by constructing a model of a brain based on simple principles and then having that model gorge itself on enormous quantities of information—everything Google indexes from the Web and beyond.

There are reasons to believe this type of approach might just work. Google’s own language translation technology has made remarkable strides simply by ingesting vast quantities of documents already translated by hand and then applying statistical learning techniques to figure out what translations work best. Likewise, IBM’s Watson demonstrated a remarkable ability to answer Jeopardy questions by applying similar statistical techniques to information gathered from sources including the website Wikipedia (see “How IBM Plans to Win Jeopardy!”). But this is very different from the way humans develop an understanding of the world and of language.
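The count-based statistical idea behind those systems can be illustrated with a toy sketch. This is not Google’s or IBM’s actual code, and the tiny hand-aligned word pairs below stand in for the millions of human-translated documents a real system would ingest; the point is only to show how “figuring out what translations work best” can reduce to counting co-occurrences:

```python
from collections import defaultdict

# Toy parallel corpus of (English, French) word-aligned pairs. In a real
# system, alignments are learned from vast quantities of translated text;
# here they are supplied by hand purely for illustration.
aligned_pairs = [
    ("house", "maison"), ("house", "maison"), ("house", "domicile"),
    ("blue", "bleu"), ("blue", "bleue"), ("blue", "bleu"),
]

# Count how often each source word was rendered as each target word.
counts = defaultdict(lambda: defaultdict(int))
for src, tgt in aligned_pairs:
    counts[src][tgt] += 1

def best_translation(word):
    """Return the most frequent translation and its relative frequency."""
    options = counts[word]
    total = sum(options.values())
    tgt, n = max(options.items(), key=lambda kv: kv[1])
    return tgt, n / total

print(best_translation("house"))
```

The sketch picks “maison” for “house” because it appears most often in the training pairs—no grammar, no world knowledge, just frequency. That is exactly the gap Katz points to: the method produces useful output without anything resembling understanding.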

If Kurzweil’s model does not accurately represent how the brain works, the question is whether his approach will hit a wall: whether simply mimicking understanding can keep producing genuinely useful responses to ever more sophisticated questions.

Katz continues:

It is quite possible that this approach will allow his group to improve precision of Google’s search results, or to better guess what article a particular user may want to read. However, the Watson system was created to play a game, and it is great at doing that, but it had no common sense and no real understanding of even the concepts that it gave answers about. I am afraid that giving a Watson-like system an order of magnitude more data will not change this fact.

Katz’s objections make a lot of sense to me. But I think Kurzweil’s project could still have a very important impact. Even if it completely fails to deliver the kind of results Kurzweil and Google are hoping for, it will push the statistical approach to AI further than ever. And so, either way, it may show where AI research should be focusing its efforts and help us understand what makes a mind a little better than before.


I am the senior editor for AI at MIT Technology Review. I mainly cover machine intelligence, robots, and automation, but I’m interested in most aspects of computing. I grew up in south London, and I wrote my first line of code (a spell-binding infinite loop) on a mighty Sinclair ZX Spectrum. Before joining this publication, I worked as the online editor at New Scientist magazine. If you’d like to get in touch, please send an e-mail to will.knight@technologyreview.com.
