February 15, 2011

When computers exceed our ability to understand how the hell they do the things they do

Which would be pretty much now.

Great quote from David Ferrucci, the Lead Researcher of IBM's Watson Project:

"Watson absolutely surprises me. People say: 'Why did it get that one wrong?' I don't know. 'Why did it get that one right?' I don't know."

Essentially, the IBM team came up with a whole whack of fancy algorithms and shoved them into Watson. But they didn't know how these algorithms would work in concert with each other and produce emergent effects (i.e. computational cognitive complexity). The result is the seemingly intangible, and not always coherent, way in which Watson gets questions right—and the ways in which it gets questions wrong.
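The "algorithms in concert" idea can be sketched as a toy ensemble: each candidate answer is scored by several independent algorithms, and a learned weighting blends those scores, so no single component explains why one answer beats another. This is only an illustrative sketch—the scorer names, scores, and weights below are invented, and this is not Watson's actual DeepQA pipeline.

```python
def combine(scores, weights):
    """Weighted sum of per-algorithm confidence scores for one candidate."""
    return sum(weights[name] * s for name, s in scores.items())

# Hypothetical per-candidate scores from independent scoring algorithms.
candidates = {
    "Toronto": {"keyword_match": 0.9, "temporal": 0.2, "geo": 0.1},
    "Chicago": {"keyword_match": 0.6, "temporal": 0.7, "geo": 0.8},
}

# Weights as some training process might have tuned them.
weights = {"keyword_match": 0.5, "temporal": 0.3, "geo": 0.2}

# The winner emerges from the interplay of all the scores at once;
# asking "which algorithm picked it?" has no clean answer.
ranked = sorted(candidates,
                key=lambda c: combine(candidates[c], weights),
                reverse=True)
print(ranked[0])  # prints "Chicago"
```

Even in this tiny example, the top answer flips if any one weight or score shifts slightly, which is why debugging a wrong answer ("Why did it get that one wrong?") means untangling the whole ensemble rather than pointing at a single faulty rule.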

As Watson has revealed, when it errs, it errs really badly.

This kind of freaks me out a little. When asking computers questions that we don't know the answers to, we aren't going to know beyond a shadow of a doubt when a system like Watson is right or wrong. Because we don't know the answer ourselves, and because we don't necessarily know how the computer got the answer, we are going to have to take a tremendous leap of faith that it got it right when the answer seems even remotely plausible.

Looking even further ahead, it's becoming painfully obvious that any complex system that is even remotely superior (or simply different) relative to human cognition will be largely unpredictable. This doesn't bode well for our attempts to engineer safe, comprehensible and controllable super artificial intelligence.

8 comments:

And what happens when Watson starts asking his(?)/her(?)/its(?) own questions? Or withholding information for reasons unknown to us? I'm sure the technology isn't that advanced yet, but it's something to consider.

Well, explaining how he arrived at the answer isn't part of the 'requirement spec' for Watson, but I would imagine this could be added in. So in addition to the answer, he (it?) could walk through the methods he used to arrive at it: I found these references, I assumed this meaning for an ambiguous phrase, I made the following connections, etc.

Sure, Watson gets the wrong answer sometimes, but it's much better than humans. So if you had such a machine, you could be pretty certain that you couldn't get better answers anywhere. That must count for something.

As with Deep Blue, this is a case of intelligent programmers imitating human thought, not duplicating it. The machine itself only runs a huge program and intrinsically has no more 'intelligence' than an answering machine.

As such massive computation becomes commonplace over the next decade or two, we will come to understand that it will never do more than supplement human intelligence, being incapable of surpassing it.

It's ironic that our discussion of AI is shielded from bots by an anti-Turing test, namely those wiggly word-verification characters we so easily read. By betting on the impossibility of AI, the inventor of this clever method is actually the first person to ever make money on AI!

George Dvorsky

Canadian futurist, science writer, and ethicist, George Dvorsky has written and spoken extensively about the impacts of cutting-edge science and technology—particularly as they pertain to the improvement of human performance and experience. He is a contributing editor at io9, the Chairman of the Board at the Institute for Ethics and Emerging Technologies, and the program director for the Rights of Non-Human Persons program.