Jun 7
Science fiction is rife with intelligent, self-aware computers, from the benevolent “Mike” of Robert A. Heinlein’s The Moon Is a Harsh Mistress to the murderous HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey.

But before we can actually design and build super-smart machines like those in our books and movies, we need to better understand the nature of human intelligence. How do we learn and reason? What’s going on in our own heads that can be applied to our computers?

That’s where Josh Tenenbaum comes in. As a professor in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, he uses a combination of mathematical modeling, computer simulation, and behavioral experiments to explain how people learn new things.

Take the example of a child learning the meaning of a simple word, like “horse.” When a parent points at one, how does a child so easily understand that the word applies to the strange animal in the field, and not to the field itself, or the color of the animal, or any number of other possible definitions? Cognitive scientists and child psychologists have determined that children develop a complex conceptual structure, a sort of mental taxonomy of the world around them, which they use to order and define objects when they see them. They also have a bias to focus on common nouns, so they’re more likely to assume that “horse” means the animal, rather than an adjective like “fuzzy” or “brown.”

Tenenbaum’s research aims to define and describe these cognitive tools, and build mathematical models to imitate them. “These are very rapid inductive leaps to abstract knowledge that we see children making, and that we’d like to have in computers,” he says. “Children have these abstract contextual models, and we can model how these can be learned.”
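To make the word-learning example concrete, here is a minimal sketch of Bayesian concept learning in the spirit of Tenenbaum’s models. Everything specific in it — the three candidate meanings, the objects, and the prior weights — is an illustrative assumption, not his actual model; the two ideas it does capture from the text are a prior bias toward object categories (common nouns) and a preference for the smallest hypothesis that still covers the observed examples (the “size principle”).

```python
# Toy Bayesian concept learning: which meaning of "horse" best
# explains the examples a child has seen? All hypotheses, extensions,
# and prior numbers below are illustrative assumptions.

# Each hypothesis is a candidate meaning: the set of things in the
# scene the word would apply to.
hypotheses = {
    "the animal (horse)":    {"horse1", "horse2", "horse3"},
    "any brown thing":       {"horse1", "horse2", "fence", "barn"},
    "anything in the field": {"horse1", "horse2", "horse3",
                              "field", "fence", "grass"},
}

# Prior: the bias toward common nouns (object categories) over
# properties or scenes, as described above.
prior = {
    "the animal (horse)":    0.6,
    "any brown thing":       0.2,
    "anything in the field": 0.2,
}

def posterior(observed):
    """Bayes' rule with the 'size principle': smaller hypotheses
    that still cover every observed example are favored, and the
    advantage grows with each new example."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(obj in extension for obj in observed):
            # Chance of drawing each example at random from the extension.
            likelihood = (1.0 / len(extension)) ** len(observed)
        else:
            likelihood = 0.0  # hypothesis fails to cover an example
        scores[name] = prior[name] * likelihood
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# After hearing "horse" applied to two different horses, the
# animal-category hypothesis dominates the alternatives.
print(posterior(["horse1", "horse2"]))
```

Run on two examples, the animal-category reading already takes most of the posterior mass, which is the rapid inductive leap the passage describes: a few labeled examples, combined with the right prior structure, are enough to pin down a word’s meaning.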

It all brings us closer to the complementary goals of understanding human learning in computational terms and building computers that learn like humans. “We’re not trying to build a machine child,” says Tenenbaum. “But our long-term goal is to build machine systems that have really deep cognitive capacity like that of a child.”