AI scored on par with a four-year-old

Despite decades' worth of research, unbelievable computing power and sophisticated algorithms, one of today's best artificial intelligences can't score better than a four-year-old on a standard IQ test.

Image: hwalls.com

There's a double purpose to artificial intelligence, according to Herbert Simon, one of the field's pioneers. One is to use the power of computers to augment human thinking, just as we use motors to augment human or horse power. Robotics and expert systems are major branches of that. The other is to use a computer's artificial intelligence to understand how humans think. Essentially, by building artificial replicas of the human brain we might understand some of the fundamental tenets that make us human, like consciousness. Maybe someday we'll answer the long-standing question of whether or not we have a soul.

It all sounds extremely exciting, but progress is slow, even though it might not look like it. In 1997, IBM's Deep Blue computer defeated world chess champion Garry Kasparov. Then, in 2011, Watson defeated the best Jeopardy! human player ever. Both made headlines, but it's important not to lose sight of the fact that these machines, when taken out of their natural setting (i.e. made to do something they weren't programmed to do), are plain stupid.

Even the best ones – the kind made to 'think' like a human – have problems reasoning like we do, as Stellan Ohlsson demonstrated. Ohlsson, a researcher at the University of Illinois, reprogrammed ConceptNet, one of the most famous AIs, under constant development at MIT since the 1990s, so it could answer questions on an IQ test designed for children. The test, called the Wechsler Preschool and Primary Scale of Intelligence test, assesses performance in five categories: information, vocabulary, word reasoning, comprehension, and similarities.

For "information" related questions, ConceptNet had to answer questions like "Where can you find penguins?", while in "vocabulary" the computer had to know "what is a house?", for instance. In these categories, as well as in word reasoning or similarities, where the computer had to know that "pen and pencil are both ___", ConceptNet fared all right. On the comprehension test, however, the computer failed miserably. When asked "why do people shake hands?", the AI hilariously answered because of "epileptic fits". There were other instances where ConceptNet missed the mark. During the word reasoning part, the AI was given the following clues: "This animal has a mane if it is male," "this is an animal that lives in Africa," and "this is a big yellowish-brown cat." Instead of lion, the AI came up with the following answers, in order of the value assigned to each one: dog, farm, creature, home, and cat.
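To get a feel for why a system like this can go so wrong, here is a toy sketch of the general approach of ranking candidate answers by summed association strength across the clues. This is not ConceptNet's actual code or data; the association scores, candidates, and function names below are all made up for illustration. The point is that without a common-sense constraint (such as "the answer must be an animal"), weakly related concepts like "farm" or "home" can still end up in the ranking.

```python
# Toy illustration (NOT the real ConceptNet algorithm or data):
# rank candidate answers by summing hypothetical association scores
# between each clue concept and each candidate.

from collections import defaultdict

# Made-up association strengths between clue concepts and candidates.
ASSOCIATIONS = {
    "mane":   {"lion": 0.9, "horse": 0.7, "dog": 0.2},
    "africa": {"lion": 0.8, "dog": 0.1, "farm": 0.3},
    "cat":    {"lion": 0.9, "cat": 1.0, "dog": 0.4, "home": 0.3},
}

def rank_answers(clues, allowed=None):
    """Sum association scores per candidate across all clues.

    If `allowed` is given, restrict candidates to that set --
    a crude stand-in for the common-sense filter Ohlsson describes.
    """
    totals = defaultdict(float)
    for clue in clues:
        for candidate, score in ASSOCIATIONS.get(clue, {}).items():
            if allowed is None or candidate in allowed:
                totals[candidate] += score
    return sorted(totals.items(), key=lambda kv: -kv[1])

# Unconstrained: non-animals like "farm" and "home" appear in the ranking.
print(rank_answers(["mane", "africa", "cat"]))

# Constrained to kinds of cats, as common sense would suggest:
print(rank_answers(["mane", "africa", "cat"], allowed={"lion", "cat"}))
```

With made-up scores like these, "lion" happens to win even without the filter; the real failure the researchers observed suggests the system's actual associations were far noisier, which is exactly why they argue for constraining the candidate set.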

"Common sense should at the very least confine the answer to animals, and should also make the simple inference that 'if the clues say it is a cat, then types of cats are the only alternatives to be considered,'" say Ohlsson and co.

“The ConceptNet system scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for five- to seven-year-olds,” they say.