Watson in Philosophical Jeopardy?

Usually, when we talk about pop culture, we talk about things like The Simpsons and House. You know, fun stuff. But something happened on the game show Jeopardy, on Valentine's Day, and we think it is worthy of our first post.

It was a special episode. Host Alex Trebek brought back the two biggest champions in Jeopardy history. The first was Ken Jennings, who won 74 consecutive games in 2004 and 2005 and earned more than $2.5 million during his reign. The second was Brad Rutter, a five-day champ (from when they had term limits) who later returned to win three major championship tournaments, earning the most prize money ever: $3,255,102. That's cool enough.

Here's the kicker: Alex pitted them against Watson, an IBM super computer.

You might think that it would be no contest. Because it's a trivia game show, all the programmer would have to do is load the computer with a bunch of facts and have the computer spit them out at lightning speed upon request. But if you have ever seen Jeopardy, you know it's not that easy. Jeopardy clues are in the form of answers, and contestants must shape their responses in the form of questions. More difficult still, Jeopardy clues are layered with puns, double entendres, jokes, and other linguistic nuances. Clues in a category like "Inca Hoots" might be about owls, or about the Inca Indians. Many people can't even understand what a Jeopardy clue is asking, much less know the proper response. A deep understanding of language and the way language works is needed to excel at the game. So can Watson, who is ultimately just a computer, understand language well enough to play Jeopardy, and play it better than the best human players ever?

Whether a machine could ever think--or, more specifically, understand language--is something that has fascinated philosophers for centuries. René Descartes (1596-1650) thought machines couldn't, because thinking was something that happened in the immaterial soul, and souls could not attach to machines. But once we discovered that thinking (and, specifically, language) is directly correlated to, and a result of, the workings of neuronal connections in the brain, we saw how shortsighted Descartes was. Thought and understanding seemed to be the result of mechanistic workings. (It was eventually discovered that language processing is done in specific areas of the brain, later named Broca's and Wernicke's areas.) Later, when computers were invented and then advanced exponentially, the idea that we might one day be able to replicate those mechanistic workings was no longer farfetched. And so it seemed inevitable that, one day, computers would be using language like humans.

But would that constitute true understanding? Would such a computer really understand the meaning of the language it was using in the same way that a human does? Alan Turing (1912-1954) suggested a test (now known as the Turing Test): if a computer can carry on a conversation with you that is indistinguishable from that of a human, the computer must be said to understand language. The philosopher John Searle (1932--) famously disagreed, suggesting that no matter how well a computer mimicked human language use, it could not be said to understand language. According to Searle, a computer can only shuffle symbols, and that can't result in linguistic understanding. To help make his point, Searle asks us to consider the Chinese Room thought experiment: a room where a person can reproduce answers to written Chinese questions by finding the associated answers in a giant book. But all the answers are in Chinese, too. All the person is doing is scribbling un-understood symbols that he finds next to other un-understood symbols in his book. He is only shuffling symbols; he does not understand Chinese. Likewise, Searle argues, the symbol shuffling of a computer could never result in linguistic understanding.
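The room Searle describes is, in effect, a lookup table. A minimal sketch in Python makes the point vivid (the symbol strings below are placeholders invented for the example, not real Chinese prose):

```python
# A toy model of Searle's Chinese Room: answering is pure table lookup.
# The "rule book" pairs question-symbols with answer-symbols; nothing in
# the procedure involves knowing what any of the strings mean.
RULE_BOOK = {
    "你好吗": "我很好",   # placeholder question -> placeholder answer
    "几点了": "三点钟",
}

def chinese_room(question: str) -> str:
    """Copy out the symbols the book pairs with the input symbols."""
    return RULE_BOOK.get(question, "不知道")  # fallback: an "I don't know" symbol
```

Whatever comes out of the room, the function manipulates the strings only by shape, never by meaning; that is exactly the intuition Searle's thought experiment trades on.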

Many philosophers disagree with Searle, including us. If nothing else, Searle seems to misunderstand what a computer is. We describe what a computer does in terms of symbol shuffling, sure; but we could also describe what a brain does, its neuronal firings, in terms of symbol shuffling. That doesn't make the brain a symbol shuffler. Ultimately, a computer is a complex collection of silicon chips sending signals to one another, much like our brain is a complex collection of neurons sending signals to one another. A computer that passes the Turing Test may very well do so by mimicking the neuronal firings of a human brain--maybe exactly, maybe approximately, or maybe only functionally (in that it gives the same output as a human brain). But since we don't know how or why our brain's neuronal firings give rise to mental states like understanding, how do we know that silicon chips mimicking our neuronal firings don't? And since we conclude that other people understand language because they can communicate with us, shouldn't we conclude that a computer that communicates with us understands language as well?

Of course, saying that a computer might understand language is not the same as saying that the computer has a mind, or that it is conscious and self-aware. Watson, no matter how brilliant, is still a long way from HAL in 2001: A Space Odyssey. But, depending on his success, he might be a giant leap in that direction.

So, how did Watson do?

We still have two nights of competition, but on the first of three nights (which only included a first round), Watson did remarkably well--so well that he left Ken Jennings in the dust and made Brad Rutter play catch-up. (Also remarkable: Watson found the Daily Double on his first pick.) He got nearly every answer he tried right, usually knew the answer when someone beat him to it (his thought processes were displayed on screen in real time), often refrained from buzzing in when he didn't know the answer, and is apparently a big fan of the Beatles (he nearly swept the category). The night ended with the score at $2,000 (Ken), $5,000 (Watson), and $5,000 (Brad).

Many might be critical of Watson because his performance was not perfect. But if his performance were perfect, you would be able to tell that he was a computer, and that would make him fail the Turing Test. Even more remarkable: his mistakes were all too human. They were the same mistakes that we've seen humans make on Jeopardy before. Sometimes he would restate part of the clue in his answer when he didn't need to (Maxwell's Silver Hammer instead of just Maxwell's). Was he making sure he got it right? He would form an answer based on the first part of a clue but ignore the second part (chic instead of class). Once, he identified where the relevant anatomical oddity was--in the leg--but failed to mention the oddity itself: the leg was missing. He even showed that he makes human-like mistakes by giving the exact same wrong answer that Ken had just given. (Watson might have avoided this error if Ken's answers had been fed to him so he could update his probabilities--the right answer was his second choice.)

Here is the mistake we found most extraordinary. This was the $400 clue in the category Final Frontiers:

"From the Latin for ‘end.' This is where trains can also originate."

Watson answered finis, the Latin word for end. He did not take note of the latter half of the clue, or realize that the clue was asking for an English word, not a Latin one. Most interesting, though, Ken made the latter mistake as well: his answer was terminus instead of terminal. Trebek still gave Ken the score, but it's remarkable that both Ken and Watson misunderstood what the clue was asking for in the same way. And if they misunderstood it in the same way, didn't they understand it in the same way? Isn't Watson understanding?

Something else made us think that Watson is similar to you and me: what goes on in Watson's head is much like what goes on in ours. As described by Watson's creators, the algorithm Watson used on many clues was very much like our own thought processes when we look at Jeopardy clues. Often, we don't fully understand the clues, but we're able to home in on the answer by associating the clue with a keyword or phrase. And this is, apparently, what Watson does best.
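That keyword-association strategy can be sketched in a few lines. To be clear, this is not Watson's actual algorithm (IBM's DeepQA system scores many kinds of evidence at enormous scale); the mini "knowledge base" of candidate answers and associated keywords below is invented for the example:

```python
import re

# Toy illustration of answering by keyword association: score each
# candidate answer by how many of its associated keywords appear in
# the clue, and guess the candidate with the largest overlap.
KB = {
    "Maxwell's Silver Hammer": {"beatles", "silver", "hammer", "song"},
    "terminus": {"latin", "end", "rome", "boundary"},
    "terminal": {"end", "trains", "station", "originate"},
}

def best_guess(clue: str) -> str:
    """Pick the candidate whose keyword set overlaps the clue most."""
    words = set(re.findall(r"[a-z]+", clue.lower()))
    return max(KB, key=lambda cand: len(KB[cand] & words))

print(best_guess("From the Latin for 'end,' this is where trains can also originate"))
# prints "terminal" -- "end", "trains", and "originate" all match
```

Even this crude version gets the Final Frontiers clue right precisely because it weighs the whole clue, not just the first half, which is the kind of partial matching that produced both Watson's and Ken's near-miss answers.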

Did Watson pass the Turing Test? Well, being able to play Jeopardy is not the same as being able to hold a conversation. So, not quite yet. But, regardless, this is a major feat. Computing, scientific, and philosophical history are being made. We'll be watching Tuesday and Wednesday night, with great anticipation.

"If nothing else, Searle seems to misunderstand what a computer is. We describe what a computer does in terms of symbol shuffling, sure; but I could also describe what a brain does, its neuronal firings, in terms of symbol shuffling. That doesn't make it a symbol shuffler. Ultimately, a computer is a complex collection of silicon chips sending signals to one another, much like our brain is a complex collection of neurons sending signals to one another."

Searle has a precise technical definition of a computer as a Turing machine and not a specific device using any particular machinery. You can build a Turing machine/computer out of just about anything. If natural laws give some machines minds, then only machines with the right "wiring" will have a mind.

From the Stanford Encyclopedia of Philosophy: "The Chinese Room argument is not directed at weak AI, nor does it purport to show that machines cannot think—Searle says that brains are machines, and brains think. It is directed at the view that formal computations on symbols can produce thought." -- http://plato.stanford.edu/entries/chinese-room/

If a "machine" ever has a mind (or rational thought), then we have no reason to think it does unless it can go against its own programming in a rational way. In that case having a mind means something. If we can predict what a computer does entirely based on its programming, then "rational thought" or "having a mind" would be epiphenomanal -- completely worthless.

Thanks for your comments. Let me agree with some things, and disagree with others.

You are right about the aims of Searle's original argument. I guess I was wanting to criticize those who do try to apply it more broadly, and say it entails that no computer, or android, could ever understand language or have a mind. I should have been more specific.

I disagree with your suggestion, however, that we have reason to think that a machine can have a mind only if it can "go against its own programming." There is not necessarily a biconditional relationship between mind and brain, and if we one day understand our own brains well enough to be able to predict our own actions, it will not follow that we are not minded. It would seem to indicate that we don't have free will, but it will not stop us from "mentating."

Perhaps the way to put it is this: even if we don't have free will, and thus the mind is not "worth much," it is still there. And if androids behave like us, this would be reason to think they are minded--even if they can't "go against their programming."