DeepMind’s AlphaStar crushes top StarCraft II players

Today DeepMind’s StarCraft II AI, a neural-network agent named AlphaStar, thoroughly trounced a pair of top StarCraft players. The first set of matches was against Dario “TLO” Wünsch, a StarCraft II player from Germany; TLO is a former Brood War and Supreme Commander player who currently plays for Team Liquid. AlphaStar beat TLO 5 games to 0, and each game was played with a very different strategy. DeepMind’s team later revealed that five different variants of AlphaStar had been randomly sequenced across the matches.

AlphaStar playing StarCraft II

The games were played as Protoss versus Protoss, while TLO is best known as a Zerg pro; nonetheless he gave it a try. This changed when an actual Protoss pro was brought in to try his hand against AlphaStar. Grzegorz “MaNa” Komincz is a Protoss player from Poland and a teammate of TLO, also playing for Team Liquid. He did not fare well either, losing every game: AlphaStar 5 – MaNa 0.

DeepMind researchers and StarCraft pro players discuss strategy

Takeaways

From these games, and from the commentary by the players and DeepMind, we can draw a few takeaways.

AlphaStar, much like AlphaGo, has a very unorthodox style of play that flummoxed the human players. Humans simply found some of the plays bizarre, yet somehow these strategies consistently beat the players.

Still, much of the gameplay and strategy mimicked what actual pro players would do.

The human players mentioned that AlphaStar discovered strategies that human pros had never considered, and now can. All the pros felt they received a lot of value from the games.

One week of AlphaStar training is roughly equivalent to 200 years of human play. That puts things in perspective. It took AlphaStar about three days of training to reach an intermediate level of play.
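The implied real-time speedup is easy to back out from those two numbers. A quick sanity check (using the 200-years-per-week figure reported above):

```python
# Rough real-time speedup implied by the reported numbers:
# ~200 years of accumulated gameplay compressed into 1 week of training.
years_of_play = 200
days_of_training = 7

speedup = years_of_play * 365.25 / days_of_training
print(round(speedup))  # ≈ 10436, i.e. over 10,000x faster than real time
```

That factor comes from running many game instances in parallel, not from any single game running faster.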

DeepMind created an AlphaStar StarCraft II league as part of its training, adapting and refining subsequent generations of league winners as they played each other.
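The league idea can be illustrated with a toy population loop. This is a minimal sketch, not DeepMind’s actual method: agents are reduced to a single “strength” number, match outcomes are strength plus noise, and each generation keeps the top performers and refills the league with slightly perturbed copies of them.

```python
import random

def play(a, b, rng):
    # Toy match: higher strength usually wins; noise stands in for game variance.
    return a if a + rng.gauss(0, 0.1) >= b else b

def league_generation(pop, rng, keep=4, mutate=0.2):
    """One league round: round-robin matches, then refine the winners."""
    wins = [0] * len(pop)
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            winner = i if play(pop[i], pop[j], rng) == pop[i] else j
            wins[winner] += 1
    # Keep the top `keep` agents, then refill the league with mutated copies.
    ranked = sorted(range(len(pop)), key=lambda k: wins[k], reverse=True)
    survivors = [pop[k] for k in ranked[:keep]]
    children = [s + abs(rng.gauss(0, mutate)) for s in survivors]
    return (survivors + children)[:len(pop)]

rng = random.Random(0)
pop = [rng.random() for _ in range(8)]
initial_mean = sum(pop) / len(pop)
for _ in range(10):
    pop = league_generation(pop, rng)
```

After a few generations the league’s average strength rises, which is the core property the real AlphaStar league exploits at vastly larger scale, with neural networks in place of the scalar “strength”.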

The trained agent runs on modest hardware: just a plain PC with a consumer GPU.

Training, however, takes more sophisticated hardware, such as Google’s TPU accelerators. DeepMind also used a custom binary from Blizzard that let them run the game without rendering the UI, for faster gameplay and faster updating of the model.

DeepMind’s blog post on how they created AlphaStar

Watch a replay of the livecast

In conclusion, it’s clear that, much like AlphaGo before it, AlphaStar has proved that a strong machine-learning model can now beat top players at a real-time game like StarCraft II. Yet another form of gameplay falls to the machine.