The Recent Hype About the Turing Test — And Why It May Not Matter

The blogosphere has been full of the news that a Russian chatbot convinced 33% of judges at the recent Royal Society-sponsored Turing test that it was human. There's apparently nothing unusual or advanced about the technology behind Eugene Goostman, the chatbot that's scripted to have the personality of a 13-year-old Ukrainian with poor English skills. It's a pattern-matching chatbot whose backstory (a flippant teenager with weak English) makes its confusing and off-topic answers seem more forgivable.

Since all the media hype about “the computer that passed the Turing test,” there’s been a bit of a backlash from more astute observers of the AI world.

Doug Aamoth published a short piece on Time online that recounts his brief conversation with the Goostman chatbot. The conversation has all the markings of a shaky exchange with a pattern-matching chatbot. There’s very little in the conversation to make Doug believe he’s talking with a human. As I wrote in my post The Problem With Today’s Chatbots, pattern-matching dialog programs are just very, very limited in their ability to mimic real human conversation. To be convincing, a program has to be able to react to completely unanticipated questions and comments. Simply diverting the conversation to another topic, or coming up with some generic, hollow comment, isn’t how conversation works.
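To see why these programs come up short, it helps to look at the mechanism itself. The toy script below is a minimal ELIZA-style sketch, not Goostman's actual code; the rules and canned replies are invented for illustration. The bot matches the user's message against a small list of regex patterns, fills in a reply template if one matches, and otherwise falls back on exactly the kind of generic, topic-changing deflection described above.

```python
import re
import random

# Invented rules for illustration: (pattern, reply templates).
# Real scripted chatbots use far larger rule sets, but the
# mechanism is the same: match a regex, fill a template.
RULES = [
    (re.compile(r"\bi am (.+)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bmy (\w+)\b", re.I),
     ["Tell me more about your {0}."]),
    (re.compile(r"\b(hello|hi)\b", re.I),
     ["Hello! How are you today?"]),
]

# Generic deflections used when nothing matches -- the hollow,
# subject-changing replies that give these bots away.
DEFLECTIONS = [
    "That is interesting. Please go on.",
    "Let's talk about something else. Do you like music?",
]

def respond(message, rng=random):
    """Return a scripted reply to one user message."""
    for pattern, templates in RULES:
        match = pattern.search(message)
        if match:
            return rng.choice(templates).format(*match.groups())
    return rng.choice(DEFLECTIONS)
```

Any question the rule writers didn't anticipate falls straight through to the deflection list, which is why a probing judge can unmask a bot like this within a few exchanges.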

If Eugene Goostman’s performance tells us anything, it’s probably that humans can be fooled by chatbots, especially when they want to be. But we already knew that. There are all kinds of bots out in the world and they fool people all the time. Maybe what we need is a training course on “How to Know You’re Talking to a Chatbot.” But that might just spoil people’s fun.