DeepMind Has Taught an AI to Do Something Quite Remarkable

In Brief

Researchers at DeepMind have published a paper describing how they taught artificially intelligent computer agents to navigate unfamiliar virtual environments. While the results look slightly goofy, they represent a major step forward on the path to autonomous AI movement.

Google’s artificial intelligence (AI) subsidiary DeepMind has released a paper detailing how its AI agents have taught themselves to navigate complex virtual environments, and the results are weird, wonderful, and often extremely funny.

The agents in the simulations were given a set of sensors (which let them know things like whether they were upright or whether a leg was bent) and a single drive: to keep moving forward. Everything else that you see in the video, from jumping and running to using knees to scale obstacles, is the result of the AI working out, through reinforcement learning, how best to continue moving forward.
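To make the idea concrete, here is a minimal sketch (not DeepMind's actual code) of a reward function of the kind the paragraph describes: the agent is scored mainly on forward progress, with a small cost for effort, and everything about *how* to move is left for reinforcement learning to discover. The function name, the `upright` sensor flag, and all weight values are illustrative assumptions.

```python
def locomotion_reward(forward_velocity, upright, energy_used,
                      velocity_weight=1.0, energy_weight=0.01):
    """Toy reward for a simulated walker.

    forward_velocity: how fast the agent is moving in the desired direction.
    upright: a sensor-style flag; a fallen agent earns nothing this step.
    energy_used: total actuator effort, lightly penalized to discourage flailing.

    The weights here are made-up defaults, not values from the paper.
    """
    if not upright:
        return 0.0
    return velocity_weight * forward_velocity - energy_weight * energy_used


# A fast, upright step is rewarded; a fallen agent gets zero.
print(locomotion_reward(2.0, True, 10.0))   # forward progress minus energy cost
print(locomotion_reward(5.0, False, 0.0))   # fallen over: no reward
```

A reinforcement-learning algorithm would then adjust the agent's policy to maximize the sum of such rewards over an episode, which is how behaviors like jumping and vaulting can emerge without ever being programmed explicitly.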

The complexity of the agents’ movements is a testament to how far AI has come in recent years. While agents in simulations like these often break down when faced with unfamiliar environments, DeepMind’s agents used strikingly sophisticated movements to traverse obstacles.

The groundwork being laid by experiments such as these is pivotal to the integration of AI into society. Eventually, researchers will be able to incorporate these advancements into future robots capable of navigating your home or the streets, ushering in an age of truly seamless robot/human interaction.