DeepMind’s Agent57 AI beats human players at 57 Atari games

If you had an Atari console growing up, your mom probably told you to stop playing. For Agent57, a new artificial intelligence (AI) from DeepMind, the opposite is true. After spending countless hours training, the sophisticated AI bested human benchmark scores in all 57 Atari 2600 games.

It’s a vast improvement over previous AIs, which struggled to find success in all of the titles. For DeepMind, it is the latest experiment in a crusade to dethrone human players across the world of video games. However, Agent57 (and AI in general) still has a long way to go to rival the versatility of humans.

Master of Atari

For many people, Atari games are highly nostalgic. For AI developers, they are a perfect testing ground. That’s why researchers created the Arcade Learning Environment. It allows them to test the limits of their deep-learning systems while gathering helpful data to make changes for the future.


The researchers behind Agent57 said, “Games are an excellent testing ground for building adaptive algorithms: they provide a rich suite of tasks which players must develop sophisticated behavioral strategies to master, but they also provide an easy progress metric—game score—to optimize against.”

They trained Agent57 with versatility in mind. The AI learned each of the 57 games in the Atari 2600 lineup individually before excelling at them. Previous algorithms trained on the same suite fell short in several titles. Games like “Pitfall,” “Skiing,” and “Montezuma’s Revenge” posed the biggest challenges because their rewards are sparse or long-delayed. In “Skiing,” for instance, the score arrives only at the end of a run, so the AI must act through long stretches without knowing whether its choices are correct. That often leads to mistakes.

To address this challenge, DeepMind built Agent57 on an updated version of the Deep Q-Network (DQN), the first deep-learning agent to beat several Atari games. Among the improvements is a meta-controller. It allows the AI to weigh the pros and cons of exploiting what it already knows against exploring more. By taking its time to learn, the system was able to find more success in delayed-reward games like “Skiing.”
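DeepMind describes the meta-controller as a bandit-style mechanism that picks which exploration policy to run next based on how well each has paid off. A toy sketch of that idea follows; the policy count, the UCB formula choice, and the simulated `episode_return` reward function are all illustrative assumptions, not details taken from Agent57 itself.

```python
import math
import random

def ucb_meta_controller(num_policies=4, episodes=200, seed=0):
    """Toy meta-controller: a UCB bandit that picks which exploration
    policy to run for each episode, favoring policies that earn more."""
    rng = random.Random(seed)
    counts = [0] * num_policies    # times each policy was chosen
    values = [0.0] * num_policies  # running mean episode return per policy

    # Illustrative stand-in for "run one episode with policy i":
    # here, more exploratory policies (higher i) simply earn more,
    # as patience does in a sparse-reward game like "Skiing."
    def episode_return(i):
        return rng.gauss(float(i), 0.1)

    for t in range(1, episodes + 1):
        if t <= num_policies:
            # Try every policy at least once first.
            arm = t - 1
        else:
            # Pick the policy with the highest upper confidence bound:
            # mean return plus a bonus for being under-explored.
            arm = max(range(num_policies),
                      key=lambda i: values[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        r = episode_return(arm)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # incremental mean

    return counts

counts = ucb_meta_controller()
print(counts)  # the best-earning (most exploratory) policy dominates
```

The key design point is the bonus term: a policy that has been tried rarely gets an inflated estimate, so the controller keeps sampling it until the evidence against it is solid, which is exactly the immediate-reward-versus-further-exploration trade-off described above.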

Getting More Flexible

It’s fairly easy to train an AI to excel at one task. For example, a chess-playing AI can beat even grandmasters without much trouble. However, training a deep-learning network to excel at multiple tasks is far more difficult. In fact, it is one of the biggest challenges facing the AI world today.

Despite Agent57’s impressive performance at so many games, it is still only able to learn one at a time. When it moves on to a new title, it must re-learn the entire game. On the other hand, human players can easily switch between titles, using their previous experiences to continue improving at multiple games simultaneously.

That sort of versatility simply isn’t a reality yet for artificial intelligence. However, experiments like the one done with Agent57 could help unlock ways to make AI more flexible. Although human-like skills are still a long way off, progress is being made. For the future of AI, that’s a good thing.

Hey, there! I'm Cody. I am a freelance writer specializing in blog content for the tech and health industries. In my free time, I'm currently working on my debut YA sci-fi novel.