Researchers say the Turing Test is almost worthless

The robot Ava undergoes a version of the Turing Test in the 2015 film "Ex Machina."

One of the biggest misconceptions about artificial intelligence (AI) is that a machine must pass the Turing Test to be truly intelligent.

But AI scientists say the test is basically worthless and distracts people from real AI research.

"Almost nobody in AI is working on passing the Turing Test, except maybe as a hobby," Stuart Russell, an AI researcher at the University of California, Berkeley, told Tech Insider in an interview. "The people who do work on passing the Turing Test in these various competitions, I wouldn't describe them as mainstream AI researchers."

Named for the famed computer scientist Alan Turing, who proposed it in a 1950 paper, the Turing Test tasks a human evaluator with determining whether they are speaking with a human or a machine. If the machine can pass for human, then it's passed the test.

Take Eugene Goostman, a chatbot that made headlines in 2014 when it was said to have passed the test at a competition. The program's persona as a 13-year-old boy with a poor grasp of English may have been why at least 10 of the judges were fooled.

According to The Guardian, Eugene's creator Vladimir Veselov said the character's age made for a perfect smokescreen for the program's failings, making it "perfectly reasonable that he doesn't know anything."

Many researchers, like Gary Marcus, a cognitive scientist at New York University, get frustrated when the press picks up on these kinds of stories. He told Science Magazine that such competitions reward AI that is more akin to "parlor tricks" than to a "program [that] is genuinely intelligent."

Russell, who is also co-author of the standard textbook "Artificial Intelligence: A Modern Approach," said the Turing Test wasn't even supposed to be taken literally. It was a thought experiment meant to show that machine intelligence should be judged by behavior, not by self-awareness.

Benedict Cumberbatch as Alan Turing in the biopic "The Imitation Game."

"It wasn't designed as the goal of AI, it wasn't designed to create a research agenda to work towards," he said. "It was designed as a thought experiment to explain to people who were very skeptical at the time that the possibility of intelligent machines did not depend on achieving consciousness, that you could have a machine that would behave intelligently ... because it was behaving indistinguishably from a human being."

Russell isn't alone in his opinion.

Marvin Minsky, one of the founding fathers of AI science, condemned one competition called the Loebner Prize as a farce, according to Salon. Minsky called it "obnoxious and stupid" and offered his own money to anyone who could convince Hugh Loebner, the competition's namesake and sponsor, to cancel it altogether.

Marcus, for his part, is designing a series of tests that focus on genuine understanding, according to Science. One proposed test would require a machine to interpret "grammatically ambiguous sentences" that most humans parse effortlessly.

For example, with the sentence "the trophy would not fit in the brown suitcase because it was too big," most people would understand that the trophy was too big, not the suitcase. Such understanding is often difficult to program, according to Science.
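The trophy/suitcase sentence is an instance of what researchers call a Winograd schema: a sentence pair in which swapping one word ("big" for "small") flips which noun the pronoun refers to. A rough sketch of how such a benchmark might be scored, with made-up example schemas and a deliberately naive baseline (not Marcus's actual test suite), could look like this:

```python
# A minimal sketch of a Winograd-schema-style evaluation harness.
# The schemas and the baseline resolver below are illustrative
# assumptions, not part of any official test suite.

SCHEMAS = [
    # (sentence, pronoun, candidate referents, correct referent)
    ("The trophy would not fit in the brown suitcase because it was too big.",
     "it", ["the trophy", "the suitcase"], "the trophy"),
    ("The trophy would not fit in the brown suitcase because it was too small.",
     "it", ["the trophy", "the suitcase"], "the suitcase"),
]

def naive_resolver(sentence, pronoun, candidates):
    """Baseline that always picks the first-mentioned candidate.

    It gets the "too big" variant right and the "too small" variant
    wrong, showing why word-swap pairs defeat shallow heuristics:
    answering both correctly requires knowing how objects fit
    inside containers, not just parsing the grammar.
    """
    return candidates[0]

def score(resolver, schemas):
    """Fraction of schemas the resolver answers correctly."""
    correct = sum(
        resolver(sentence, pronoun, candidates) == answer
        for sentence, pronoun, candidates, answer in schemas
    )
    return correct / len(schemas)

print(score(naive_resolver, SCHEMAS))  # 0.5
```

The paired design is the point: any trick that ignores meaning scores around chance, so only a system with some model of the world can do well.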

Marcus hopes that the new competitions would "motivate researchers to develop machines with a deeper understanding of the world."