Robots Are Now Learning To Reason Like Humans

DeepMind has developed a neural network that can do more than just process information.

It’s no secret that a lot of people, including some very influential personalities, harbor the fear that AI, or artificial intelligence, could one day surpass human intelligence and take over the world. But while such a threat may not be entirely far-fetched, right now there’s one big stumbling block keeping robots and machines from even scratching the surface of what it is like to be human: the ability to reason and think logically.

Researchers have been trying to figure out how to build this ability into computers. Now Google’s DeepMind may be pulling ahead in the race: it has developed an algorithm that enables AI to handle the most basic form of reasoning, namely relational reasoning.

What is relational reasoning? It’s a form of thinking that uses logic to connect and correlate objects, places, patterns, sequences and other entities. It’s what we humans use to decide which is the best bunch of grapes at the grocery, or what the evidence at a crime scene means. It’s intuitive and intrinsic in us, which is probably what makes it so difficult to teach to AI. Unlike performing a simple, repetitive or manual task, recognizing something and then relating it to something else in the right context is far from straightforward.

What DeepMind built is a neural network that can be plugged into other neural networks to give them a ‘combined power’ to do relational reasoning. The researchers trained the AI on images of 3D shapes of various colors and sizes, having it analyze pairs of objects and essentially forcing it to work out the relationship between them. They then asked questions that combined shape, size, color and position, such as: does the red object behind the green thing have the same shape as the blue object to the left of the gray thing?
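The core idea is to score every pair of objects in a scene with one small network, sum those pairwise scores, and feed the total into a second network that produces the answer. Here is a minimal sketch of that pairwise-sum architecture in plain NumPy; the dimensions, layer sizes, and random untrained weights are all illustrative assumptions, not DeepMind’s actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Apply a small multi-layer perceptron: linear layers with ReLU between them."""
    for i, (W, b) in enumerate(weights):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers only
    return x

def init_mlp(sizes):
    """Random untrained weights (illustrative only -- a real model would be trained)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

# Hypothetical dimensions: each "object" is an 8-dim feature vector
# (e.g. from a vision network), and the question is a 4-dim embedding.
obj_dim, q_dim, hidden, answer_dim = 8, 4, 16, 10

g_theta = init_mlp([2 * obj_dim + q_dim, hidden, hidden])  # scores one object pair
f_phi   = init_mlp([hidden, hidden, answer_dim])           # aggregate -> answer scores

def relation_network(objects, question):
    """Sum the pairwise relation scores over all object pairs, then answer."""
    pair_sum = np.zeros(hidden)
    for i in range(len(objects)):
        for j in range(len(objects)):
            pair = np.concatenate([objects[i], objects[j], question])
            pair_sum += mlp(pair, g_theta)
    return mlp(pair_sum, f_phi)

objects = rng.standard_normal((5, obj_dim))   # five objects in the scene
question = rng.standard_normal(q_dim)
logits = relation_network(objects, question)
print(logits.shape)  # (10,) -- one score per candidate answer
```

Because the pairwise network sees every combination of objects alongside the question, relations like “behind” or “same shape as” can be learned without hand-coding them; the summation makes the result independent of the order in which objects are listed.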

Amazingly, the system answered the questions correctly almost 96% of the time. Humans scored only 92%, while other machine-learning algorithms scored between 42% and 77%.

DeepMind also tried a language-based task, with similar results: its neural net scored an astounding 98%, while other algorithms managed a low 45%.

Notwithstanding this impressive feat, study lead Adam Santoro of DeepMind is quick to point out that there’s still a long way to go before practical applications can be implemented. At best, the ability to understand and recognize differences in color, shape and size can initially be used for computer vision. As he told New Scientist: “You can imagine an application that automatically describes what is happening in a particular image, or even video for a visually impaired person.”

But true logic and reasoning require much more than a mere understanding of physical features. That means, unless progress on this front accelerates, humans have nothing to fear: we will remain superior to AI, for the time being at least.