That's what happened with neural networks, a technique inspired by our brains' wiring that has recently succeeded in translating languages and driving cars. Now, another old idea—improving neural networks not through teaching, but through evolution—is revealing its potential. Five new papers from Uber in San Francisco, California, demonstrate the power of so-called neuroevolution to play video games, solve mazes, and even make a simulated robot walk.

Neuroevolution, a process of mutating and selecting the best neural networks, has previously led to networks that can compose music, control robots, and play the video game Super Mario World. But these were mostly simple neural nets that performed relatively easy tasks or relied on programming tricks to simplify the problems they were trying to solve. "The new results show that—surprisingly—you may actually not need any tricks at all," says Kenneth Stanley, a computer scientist at Uber and a co-author on all five studies. "That means that complex problems requiring a large network are now accessible to neuroevolution, vastly expanding its potential scope of application."
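The mutate-and-select loop that the article describes can be sketched in a few lines. The code below is a minimal illustration, not Uber's method: it evolves the weights of a tiny fixed-topology network to fit the XOR task purely by adding random noise to the best performers and keeping whichever offspring score highest, with no gradient descent involved. All function names and parameter values here are invented for the example.

```python
import math
import random

N_IN, N_HIDDEN = 2, 4          # tiny fixed network topology (illustrative)
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def init_net():
    # Flat weight vector: input->hidden weights, then hidden->output weights.
    return [random.gauss(0, 1) for _ in range(N_IN * N_HIDDEN + N_HIDDEN)]

def forward(w, x):
    # One tanh hidden layer, linear output (no biases, to keep it small).
    h = [math.tanh(sum(w[j * N_IN + i] * x[i] for i in range(N_IN)))
         for j in range(N_HIDDEN)]
    return sum(w[N_IN * N_HIDDEN + j] * h[j] for j in range(N_HIDDEN))

def fitness(w):
    # Negative squared error on XOR: higher is better.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def mutate(w, sigma=0.2):
    # "Mutation" here is simply Gaussian noise on every weight.
    return [wi + random.gauss(0, sigma) for wi in w]

def evolve(pop_size=50, generations=300, elite=10):
    pop = [init_net() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # selection: rank by fitness
        parents = pop[:elite]                    # keep the elite unchanged
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - elite)]
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
```

Note that nothing in the loop computes a gradient: the only search operators are copying, perturbing, and ranking, which is what lets the same scheme scale to networks too awkward to train by backpropagation.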