How a Computer Beat the Turing Test by Pretending to Be a 13-Year-Old Boy

Editor, UK

Big news this weekend: a computer program reportedly passed the Turing test, in what experts are claiming to be the first time AI has legitimately fooled people into believing it’s human.

The genius machine that won this accolade is known as Eugene Goostman, a program developed in Russia in 2001 that poses as a 13-year-old Ukrainian boy. In a series of chatroom-style conversations, “he” managed to convince 33 percent of a team of judges that he was human—which places his performance just over the 30 percent pass mark that Alan Turing predicted machines would reach by the year 2000 in his paper “Computing Machinery and Intelligence.”

“We stuck to the Turing test as designed by Alan Turing in his paper; we stuck as rigorously as possible to that,” Kevin Warwick, one of the organisers of the event at the Royal Society in London this weekend, told me. Deputy vice-chancellor for research at Coventry University and considered by some to be the world’s first cyborg, Warwick knows a thing or two about human-computer relations, and I asked him more about how the test went down.

Here’s what happened: Thirty judges had conversations with two different partners on a split screen—one human, one machine. After chatting for five minutes, they had to choose which one was the human. “It’s quite a difficult task for the machine because it’s not just trying to show you that it’s human, but it’s trying to show you that it’s more human than the human it’s competing against,” said Warwick. Five machines took part, but Eugene was the only one to pass, fooling one third of his interrogators.

That might not sound like a huge win, but Warwick explained that 33 percent is actually pretty impressive given the nature of the test. If you had two humans pitted against each other, the maximum either could really hope to get would be 50 percent—otherwise their competitor would be deemed inhuman.
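The arithmetic behind the headline is simple enough to check in a few lines of Python (a toy calculation using only the figures reported here: a 30-judge panel, one third of them convinced, and Turing’s 30 percent mark):

```python
# Figures as reported: 30 judges, one third of whom picked Eugene as the human.
TOTAL_JUDGES = 30
CONVINCED = 10  # one third of the panel

success_rate = CONVINCED / TOTAL_JUDGES
TURING_THRESHOLD = 0.30  # the 30 percent mark Turing predicted for the year 2000

print(f"Success rate: {success_rate:.1%}")   # → Success rate: 33.3%
print(success_rate > TURING_THRESHOLD)       # → True
```

By the same logic, even a perfectly convincing machine tops out around 50 percent, since every judge must pick one of the two partners as human.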

Warwick put Eugene’s success down to his ability to keep conversation flowing logically, but not with robotic perfection (or robotic nonsense, as we’ve seen with previous chatbots like Cleverbot). Eugene can initiate conversations, but won’t do so totally out of the blue, and answers factual questions more like a human—“and that doesn’t mean giving the right answer.” To err is human, and responding to a factual question with “I don’t know” is more convincing than reciting an encyclopaedic-style answer.
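The deliberate-imperfection trick can be caricatured in a few lines of Python. This is a toy sketch, not Eugene’s actual code; the deflection lines, trigger probability, and function names are all invented for illustration:

```python
import random

# Toy sketch of the "human imperfection" idea: instead of always
# returning the correct encyclopaedic answer, sometimes deflect the
# way a distracted teenager might. All responses here are invented.
DEFLECTIONS = [
    "I don't know, honestly.",
    "Why do you ask? That's boring.",
    "Dunno. Ask me about music instead.",
]

def answer_factual(question: str, known_answer: str) -> str:
    # A perfect, instant, encyclopaedic reply is a giveaway that
    # you're talking to a machine -- so err on purpose some of the time.
    if random.random() < 0.4:
        return random.choice(DEFLECTIONS)
    return known_answer

print(answer_factual("What is the capital of France?", "Paris."))
```

The point isn’t the code, which is trivial, but the design choice: a bot optimised for correctness looks less human than one optimised for plausible fallibility.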

Eugene’s successful trickery is also likely helped by the fact he has a realistic persona. Warwick said you don’t pick up so much on the fact that he’s Ukrainian, but it’s clear he’s a teenage boy (and some of the “hidden humans” competing against the bots were also teenagers). “In the conversations it can be a bit ‘texty’ if you like, a bit short-form. There can be some colloquialisms, some modern-day nuances with references to pop music that you might not get so much of if you’re talking to a philosophy professor or something like that,” he said. “It’s hip; it’s with-it.”

Warwick conceded the teenage character could be easier for a computer to convincingly emulate, especially if you’re using adult interrogators who aren’t so familiar with youth culture. Ask Eugene about the Beatles and he’ll go on to tell you about a cooler band you’ve never heard of. Just like a real teen.

So what are the implications of this computing milestone? Is this the end of believing anyone you meet on the internet is not only who they say they are but what they say they are, too? Well, it’s certainly a step in that direction—and that could have sinister applications. But Warwick also suggested that AI like Eugene could work on the other side of the cybercrime battle too. He painted a future where bots could sit online 24/7 and help with monitoring potential criminal activity for law enforcement: “Machines fighting machines, as it were, would be the end result.”