Friday, February 03, 2017

Artificial intelligence: Machines that reason

Complex reasoning is a hallmark of natural intelligence, as is learning from experience. Artificial neural networks — biologically inspired computational models — also learn from examples and excel at pattern-recognition tasks such as object and speech recognition. However, they cannot handle complex reasoning tasks whose solution requires memory.

Alex Graves, Greg Wayne and co-workers at Google DeepMind have now developed a neural network with read–write access to external memory, called a differentiable neural computer (DNC). The DNC's two modules — the memory and the neural network that controls it — interact like a digital computer's RAM and CPU, but do not need to be programmed. The system learns through exposure to examples to provide highly accurate responses to questions that require deductive reasoning (for example, “Sheep are afraid of wolves. Gertrude is a sheep. What is Gertrude afraid of?”), to traverse a novel network (for example, the London Underground map), and to carry out logical planning tasks.
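The key to the DNC's design is that every memory operation is differentiable, so the whole system can be trained by gradient descent. Rather than fetching a single memory slot by address, the controller reads a soft, weighted blend of all slots. The sketch below is a minimal, hypothetical illustration of this idea — content-based soft reading via cosine similarity and a softmax — not DeepMind's actual implementation; the function name, the sharpness parameter `beta`, and the toy memory contents are all assumptions for illustration.

```python
import numpy as np

def content_read(memory, key, beta=5.0):
    """Differentiable content-based read: soft attention over memory rows.

    memory: (N, W) array -- N memory slots, each a vector of width W
    key:    (W,) query vector emitted by the controller network
    beta:   sharpness of the attention (higher -> closer to a hard lookup)
    """
    # Cosine similarity between the key and each memory row
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    sim = memory @ key / norms
    # Softmax converts similarities into read weights that sum to 1;
    # because this is smooth, gradients flow back into the key (and controller)
    w = np.exp(beta * sim)
    w /= w.sum()
    # The read vector is a weighted blend of all memory rows
    return w @ memory

# Toy usage: three memory slots; the query is closest to slot 1,
# so the read vector is dominated by that slot's contents
M = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
r = content_read(M, np.array([0.0, 1.0]))
```

Because the read is a smooth function of the query, training can shape what the controller writes and retrieves, which is how the system learns tasks such as graph traversal without being explicitly programmed.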

This work represents a major leap forward in showing how symbolic reasoning can arise from an entirely non-symbolic system that learns through experience.