Scientists Develop Curious Robots

6/3/2017 11:09:59 PM

Computer scientists have programmed machines that can explore their surroundings on their own and learn for the sake of learning.

Science Magazine reports that the new approach could allow robots to learn even faster than they can now and might even surpass human scientists in forming hypotheses and pushing the frontiers of what is known.

George Konidaris, a computer scientist who runs the Intelligent Robot Lab at Brown University and was not involved in the research, said that developing curiosity is a problem core to intelligence, and one that will be most useful when you're not sure what your robot will have to do in the future.

Todd Hester and Peter Stone, computer scientists at the University of Texas at Austin, developed a new algorithm, Targeted Exploration with Variance-And-Novelty-Intrinsic-Rewards (TEXPLORE-VENIR), that relies on a technique called reinforcement learning.

This program tries something, and if the move brings it closer to some ultimate goal, such as the end of a maze, it receives a small reward and is more likely to try the maneuver again in the future.

DeepMind has used reinforcement learning to allow programs to master Atari games and the board game Go through random experimentation. But TEXPLORE-VENIR, like other curiosity algorithms, also sets an internal goal: the program rewards itself for comprehending something new, even if the knowledge doesn't get it closer to the ultimate goal.
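The idea of pairing an external goal reward with an internal novelty bonus can be sketched with a toy Q-learning agent on a one-dimensional corridor. To be clear, this is only an illustration of the general principle: TEXPLORE-VENIR itself is model-based and uses variance- as well as novelty-based intrinsic rewards, and the corridor environment, the count-based bonus, and every parameter value below are assumptions made for the sketch, not the authors' implementation.

```python
import random
from collections import defaultdict

random.seed(0)      # deterministic runs for repeatability

N_STATES = 10       # corridor cells 0..9; the extrinsic goal is cell 9
ACTIONS = (-1, 1)   # step left or right
BETA = 0.5          # weight of the curiosity (novelty) bonus

def run(episodes=300, steps=50, alpha=0.1, gamma=0.95, epsilon=0.1):
    q = defaultdict(float)     # Q-values keyed by (state, action)
    visits = defaultdict(int)  # state-visit counts driving the novelty bonus
    for _ in range(episodes):
        s = 0
        for _ in range(steps):
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            visits[s2] += 1
            extrinsic = 1.0 if s2 == N_STATES - 1 else 0.0
            # Rarely visited states pay a larger internal reward, so the
            # agent is drawn toward novelty even with no goal in sight.
            intrinsic = BETA / visits[s2] ** 0.5
            reward = extrinsic + intrinsic
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
            if extrinsic:
                break          # episode ends at the goal
    return q

q = run()
```

Early on, the novelty bonus dominates and pulls the agent toward unvisited cells; once the goal is found, its extrinsic reward propagates back through the Q-values, so the curiosity-driven exploration ends up serving the external objective.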

Hester and Stone tested their method in two scenarios. The first was a virtual maze consisting of a circuit of four rooms connected by locked doors; the second used a physical robot, a humanoid toy called the Nao.

Intelligently inquisitive bots and robots could show flexible behavior when doing chores at home, designing efficient manufacturing processes, or pursuing cures for diseases. Hester says a next step would be to use deep neural networks, algorithms modeled on the brain’s architecture, to better identify novel areas to explore, which would incidentally advance his own quest: “Can we make an agent learn like a child would?”