In A Nutshell

For over 65 years, computer scientists have used the Turing Test as a way to judge computer intelligence. Programs that pass the test (by convincing a group of people that they are having a conversation with another real human) are said to exhibit human-like intelligence. In 2014, a computer finally passed the test. Although some in the AI industry are unimpressed by the feat, there's no denying the machine met all the required criteria.

The Whole Bushel

In 1950, computer scientist and mathematician Alan Turing posed the question, "Can computers think?" Recognizing that "thinking" was a hard term to define, he asked instead whether a machine could exhibit intelligent behavior indistinguishable from that of a human. From that notion, he designed the Turing Test, which is now the benchmark for gauging computer intelligence. The test works by having an interrogator pose a series of questions to two entities, one of which is human and the other a computer. Based on a five-minute text conversation, it's the interrogator's job to determine which entity is the human and which is the machine. If the computer can fool 30 percent of all interrogators, it's said to have human-like intelligence.
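The pass criterion described above boils down to simple arithmetic over the interrogators' verdicts. As a minimal sketch (the function name and verdict labels here are hypothetical, not part of any official test protocol):

```python
def passes_turing_test(verdicts, threshold=0.30):
    """Return True if the fraction of interrogators fooled into
    judging the program human meets or exceeds the threshold
    (30 percent, per Turing's criterion as described above)."""
    fooled = sum(1 for v in verdicts if v == "human")
    return fooled / len(verdicts) >= threshold

# Example: 10 of 30 interrogators (33 percent) judge the program human,
# so it clears the 30 percent bar.
verdicts = ["human"] * 10 + ["machine"] * 20
print(passes_turing_test(verdicts))  # True
```

This is why a 33 percent score, as in the 2014 event, counts as a pass under the 30 percent standard but would fail the stricter 50 percent standard some critics prefer.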

Sixty-four years after Turing came up with the idea, and on the 60th anniversary of his death, a computer finally passed the test. It happened in June 2014, when a Russian-designed program successfully disguised itself as a 13-year-old Ukrainian boy named Eugene Goostman. Eugene duped 33 percent of the human interrogators and was the only computer, out of five entries, to pass the 2014 Turing Test held at the Royal Society in London. While other programmers have previously claimed that their computers passed, this was the only instance where a program succeeded while strictly adhering to Turing's test design, with independent verification and questions that weren't arranged before the conversation.

As with most major "wins" in history, Eugene's was not without controversy. Some claim the computer succeeded more through cheap trickery than legitimate intelligence. They argue that posing as a teenage boy speaking in a second language gives the program an unfair advantage, because the questioners wouldn't expect Eugene to have fully developed speech or to pick up on the subtle nuances of language, such as sarcasm. In other words, a boy speaking a newly learned language might come across much like a computer. Consequently, some in the artificial intelligence community are wholly unimpressed with Eugene's accomplishment, adding that fooling 30 percent of people isn't very significant anyway. They argue the standard should be higher: 50 percent or more.

Those who want to judge for themselves can talk to the online version of Eugene (although it appears as if Eugene is "away" from his website at time of publication). Regardless of whether a computer technically passing the Turing Test is as big a deal as some make it out to be, there's no doubt artificial intelligence is getting better all the time. This, of course, means our much-anticipated robot slaves aren't far behind.