What didn’t Watson the computer do?

I don’t pretend to know much more about computers than Stanley Fish does. But I do know a little bit more, and that little bit is the crucial bit.

Yesterday, Fish wrote a column in the New York Times belittling IBM’s achievement in developing a computer that can beat the best human contestants on Jeopardy. Fish compared Watson to an automated word-completion algorithm on his computer that is incorrect with frustrating frequency. Watson, Fish claims, isn’t any better.

Except that Watson is better. Unlike Fish’s program, Watson gets its answers right nearly all the time. Fish is unimpressed:

it has no holistic sense of context and no ability to survey possibilities from a contextual perspective; it doesn’t begin with what Wittgenstein terms a “form of life,” but must build up a form of life, a world, from the only thing it has and is, “bits of context-free, completely determinate data.” And since the data, no matter how large in quantity, can never add up to a context and will always remain discrete bits, the world can never be built.

I’m sure Fish knows much more about Wittgenstein than I do, but I dispute the notion that there is some “form of life” that only humans, and never computers, can comprehend. Fish asserts that the computer has no “context,” but if he had seen the Nova documentary on Watson, he would have seen some impressive demonstrations of Watson seeming to build an understanding of context on the fly. In one case, Watson appeared to “learn” that all the responses in a particular category were names of months: it missed its first answer, but it saw the error of its ways after the human contestants began giving correct “month” responses. How is this different from how humans learn about context?
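To make the idea concrete, here is a minimal sketch of the kind of adjustment described above. The mechanism is my guess, not IBM’s actual code: after each revealed correct response in a category, re-rank candidate answers by how well they match the pattern of the answers seen so far.

```python
# Hypothetical sketch: re-rank candidates once revealed answers suggest
# the category wants a particular type (here, month names).
MONTHS = {"january", "february", "march", "april", "may", "june",
          "july", "august", "september", "october", "november", "december"}

def rerank(candidates, revealed_answers):
    """Prefer candidates that share a type with the answers seen so far."""
    months_seen = sum(1 for a in revealed_answers if a in MONTHS)
    if months_seen >= 2:  # evidence the category expects month names
        candidates = sorted(candidates,
                            key=lambda c: c in MONTHS, reverse=True)
    return candidates

# Before any answers are revealed, the ranking is left alone; after two
# month answers appear, month candidates move to the top.
print(rerank(["1776", "july"], []))
print(rerank(["1776", "july"], ["may", "june"]))
```

The point is not that Watson works this way internally, but that “learning the category” can be an ordinary statistical update, not a mystical grasp of a form of life.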

In attempting to diminish IBM’s achievement with Watson, Fish quotes from Hubert Dreyfus’s 39-year-old critique of artificial intelligence: a computer can’t adapt, Fish claims, and can “at best be programmed to try out a series of hypotheses to see which best fit the fixed data.” But this isn’t the way Watson is “programmed” at all. In fact, Watson was trained using machine learning, an approach that was barely in its infancy in the 1970s when Dreyfus wrote his screed against AI. Humans didn’t give Watson a set of rules; they gave it a procedure by which to develop its own rules. Then they fed it immense quantities of information, correcting it when it responded incorrectly, so that it could adapt its rules to the new information.
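The loop described above, where a mistake triggers a correction to the machine’s own rules, can be sketched in a few lines. This is a classic perceptron-style learner, not Watson’s actual training pipeline, and the toy data is invented for illustration:

```python
# Minimal sketch of learning by correction: no hand-coded rules, just a
# procedure that adjusts weights whenever the prediction is wrong.

def train(examples, epochs=20):
    """Perceptron-style training: nudge the weights on each mistake."""
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:  # label is +1 or -1
            score = sum(w * f for w, f in zip(weights, features)) + bias
            prediction = 1 if score > 0 else -1
            if prediction != label:  # wrong answer: adapt the rule
                weights = [w + label * f for w, f in zip(weights, features)]
                bias += label
    return weights, bias

# Toy task: is the sum of the two features positive?
examples = [([2.0, 1.0], 1), ([-1.0, -3.0], -1),
            ([0.5, 0.5], 1), ([-2.0, 0.5], -1)]
weights, bias = train(examples)
score = sum(w * f for w, f in zip(weights, [1.0, 1.0])) + bias
print(score > 0)  # the learned rule now classifies a new point
```

Nobody wrote a rule for “sum positive” here; the rule emerged from corrections, which is exactly the adaptation Dreyfus said computers couldn’t do.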

Here’s Fish’s understanding of what Watson is doing: “It decomposes the question put to it into discrete bits of data and then searches its vast data base for statistically frequent combinations of the bits it is working with.” Perhaps Watson is doing that, but clearly Watson is doing much more than that as well. Decomposes how? What does it do once it finds those combinations of bits? How does Watson decide which bits are most relevant? Fish doesn’t know, and in many ways, Watson’s programmers don’t know either. That’s what machine learning is, and machine learning is only going to get more powerful in the future. We teach machines a procedure for creating their own rules, and they use that procedure to learn.
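Even the bare-bones retrieval Fish describes hides real design decisions. Here is a toy version of it, with an invented corpus and clue: break the clue into words and rank candidate answers by how often they co-occur with those words. Tokenizing, weighting, and ranking are all choices Fish’s summary glosses over:

```python
# Toy "statistically frequent combinations" search over a tiny corpus.
# Corpus, clue, and candidates are made up for illustration.
corpus = [
    "george washington was the first president of the united states",
    "abraham lincoln was president during the civil war",
    "the civil war ended in 1865",
]

def score(candidate, clue_words):
    """Count clue words that share a sentence with the candidate."""
    hits = 0
    for sentence in corpus:
        words = sentence.split()
        if candidate in words:
            hits += sum(1 for w in clue_words if w in words)
    return hits

clue = "this president led the union during the civil war".split()
candidates = ["washington", "lincoln"]
best = max(candidates, key=lambda c: score(c, clue))
print(best)  # the candidate most frequently combined with the clue's words
```

Even this crude sketch had to decide what counts as a “bit,” what counts as a “combination,” and how to break ties — the questions Fish never asks.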

They’re not as good as humans in many fields, but they’re already better than humans in some, like Jeopardy and chess. Maybe computers will never catch up to humans in some fields like art and poetry. But for most things that most of us do every day? Given enough time — say, 100 years — I wouldn’t bet against the computers.

Update: After thinking about this a bit, I have to say that unless historical events end up massively impeding technological progress, that 100-year bet is way too conservative. Computers have only been around for about 70 years, so another 100 years is an eternity. I wouldn’t be surprised to see this come to pass in as little as 30 years. That isn’t to say humans won’t be needed at that point, just that our relationship with computers will be very different from what it is now. We will start to interact with them more the way we do with other people. We won’t think, “How can I get this computer to do what I want?” We’ll think, “How can I work together with the computer to get the job done?”