AI debate: "What's a dog?"

Odds are, this child now knows what a dog is. For artificial intelligence, learning is neither as straightforward nor as quick.

Learning is a concept humans take for granted. A child touches a hot stove and learns through pain that doing so hurts. A machine may register an out-of-range temperature, but this happens because a human programmed in a rule that contact with something too hot can cause damage.

The debate now is whether artificial intelligence can someday transcend that and truly “learn.” At the recent Money 20/20 Conference, a deep-dive seminar about AI featured many practical applications in financial services, but also included philosophical discussions. Two key voices were Ray Kurzweil, a futurist now at Google, and Steve Wozniak, cofounder of Apple. (Wozniak appeared in person; Kurzweil appeared via FaceTime.)

“I grew up thinking that a computer couldn’t get anywhere near a human brain, and never would,” said Wozniak. He recalled seeing early demonstrations of machine intelligence, such as watching a computer place a blue ball in a matching blue box. He dismissed that at the time merely as a machine following rules.

“We have to tell Watson almost exactly what to do,” he said. “That’s not how the brain works.”

Apple's Steve Wozniak has his doubts about artificial intelligence.

He noted how quickly an infant learns what a dog is, from just one or two encounters. “You don’t need to show an infant 80,000 pictures of dogs,” he said, whereas that’s roughly what’s needed to “teach” a machine to recognize dogs. “Watson has to be tightly trained.”

Kurzweil, on the other hand, has been building a vision in his research and a series of books that predicts a merging of human intelligence and AI, a point he calls the singularity. At the far end, he hopes to achieve human immortality by transferring everything a person becomes over a lifetime into a silicon being.

Right now, Kurzweil agrees that people and computers don’t learn the same way. Humans tend to learn linearly, while machines learn in multiple dimensions simultaneously.

“In order for deep learning” (the leading edge of AI right now) “it has to be annotated regarding what it means,” he said, referring to the training data.
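That need for annotation can be sketched with a toy supervised classifier. This is not Kurzweil's system; the features, labels, and nearest-neighbor method here are invented purely to illustrate that the machine learns only from examples a human has already labeled with "what it means":

```python
# Toy supervised learning: every training item pairs raw data with a
# human-supplied annotation. Hypothetical features: (weight_kg, height_cm).
labeled_data = [
    ((30.0, 60.0), "dog"),
    ((4.0, 25.0), "cat"),
    ((25.0, 55.0), "dog"),
    ((5.0, 23.0), "cat"),
]

def classify(features):
    """1-nearest-neighbor: return the label of the closest annotated example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(labeled_data, key=lambda item: sq_dist(item[0], features))
    return nearest[1]

print(classify((28.0, 58.0)))  # -> dog
```

Without the label column, the program has nothing to generalize from; the annotation is the "meaning" Kurzweil refers to.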

In the human brain, the neocortex, he noted, is where scientists believe people take experience and turn it into new behaviors.

And some things stymie humans that machines have no trouble with. He gave the example of reciting the alphabet backwards. To a machine, it’s just reordering data.
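His "alphabet backwards" example is easy to make concrete. For a machine the task really is just reordering data; this sketch does it with a single slice:

```python
import string

forward = string.ascii_uppercase   # "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
backward = forward[::-1]           # reversing is one slice operation
print(backward)                    # ZYXWVUTSRQPONMLKJIHGFEDCBA
```

What takes a human real mental effort is, to the machine, a constant-cost rearrangement of bytes.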

Google's Ray Kurzweil believes artificial intelligence is on the cusp of major advances.

Kurzweil believes machines can be built to overcome the differences. At Google, he and his team built a hierarchical model—in brief, one that works more like a brain.

“This can learn from much less data,” he said, beginning to approach human ability. In the future, machines and human thinking will merge.

Wozniak spoke of how technology is woven into his life. So far, he isn’t overly impressed by Tesla’s autonomous driving software; it often seems to want to take the wrong exit, for example. He said full autonomy would be “Level 5” on the standard driving-automation scale, and that Tesla ranks only at “Level 2.” But he’s impressed by how far Siri has come.

He also built on Isaac Asimov’s Three Laws of Robotics, the science-fiction rules for how robots should safeguard humanity. “My law, Woz’s law, is that a human being should never harm a robot.” No unplugging them to erase their memories, for example.