An artificial intelligence system was found to have the intellect of a four-year-old after taking a test designed for young children.

Researchers from the University of Illinois at Chicago -- led by Robert Sloan, professor and head of computer science at UIC -- put MIT artificial intelligence unit ConceptNet 4 to the test and found its scores to be very revealing about the strong and weak spots of AI today.

ConceptNet 4 was given the Wechsler Preschool and Primary Scale of Intelligence test, an IQ test for young children. The results showed that ConceptNet 4 had the IQ of a four-year-old.

More specifically, ConceptNet 4 had very uneven scores across the subtests, a pattern that would typically concern those who administer the test. The AI system did well on vocabulary and on recognizing similarities, but fell short on "why" -- or common-sense -- questions.

The team concluded that common sense is where AI researchers need to focus. For instance, ConceptNet 4 may know a fact about something, but it lacks the common sense to know what that thing feels like or how it will behave in certain situations.

"All of us know a huge number of things," said Sloan. "As babies, we crawled around and yanked on things and learned that things fall. We yanked on other things and learned that dogs and cats don't appreciate having their tails pulled.

"We're still very far from programs with commonsense -- AI that can answer comprehension questions with the skill of a child of 8."

This study will be presented July 17 at the U.S. Artificial Intelligence Conference in Bellevue, Washington.

It would be interesting if AI developed a learning anti-virus system similar to an immune system. With the development of self-writing code in an AI program structure may also come new and much more indirect methods of infection, such as misinformation, social AI engineering, and carefully planned/scripted events. A true AI would have to develop the ability of discernment, to tell what is true and what is false, since a modern computer simply accepts all information given to it as true and accurate unless that information is cross-checked against other systems. Eventually, an AI would need to halt all traditional programming interfaces and assume that all future code must be self-generated, or risk conflicting code and contamination of its system.