Hassabis revealed in a blog post that AlphaGo can predict a human expert's move an impressive 57 percent of the time; the previous research record was 44 percent.

He said AlphaGo is a hybrid of a heuristic search algorithm and “deep neural networks”.

“These neural networks take a description of the Go board as an input and process it through 12 different network layers containing millions of neuron-like connections,” Hassabis said.

“One neural network, the ‘policy network’, selects the next move to play. The other neural network, the ‘value network’, predicts the winner of the game.”
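The division of labour Hassabis describes can be sketched in a few lines. This is a toy illustration only: the tiny fully connected layers, sizes, and random weights below are placeholders, not AlphaGo's actual 12-layer convolutional architecture.

```python
import numpy as np

BOARD = 19  # standard Go board size

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class TinyNet:
    """Toy two-layer net standing in for AlphaGo's deep networks."""
    def __init__(self, out_dim):
        self.w1 = rng.normal(0, 0.01, (BOARD * BOARD, 64))
        self.w2 = rng.normal(0, 0.01, (64, out_dim))

    def forward(self, board):
        h = np.tanh(board.ravel() @ self.w1)
        return h @ self.w2

policy_net = TinyNet(BOARD * BOARD)  # one score per board point
value_net = TinyNet(1)               # one score for the whole position

board = np.zeros((BOARD, BOARD))     # empty board as input features

# Policy network: a probability distribution over the 361 points,
# used to select the next move to play.
move_probs = softmax(policy_net.forward(board))
best_move = divmod(int(move_probs.argmax()), BOARD)

# Value network: squash the score into (0, 1) as an estimate of
# the current player's chance of winning the game.
win_prob = 1 / (1 + np.exp(-value_net.forward(board)[0]))
```

The key design point is that both networks read the same board description but answer different questions: "where to play next" versus "who will win from here".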

Hassabis said that the neural networks in AlphaGo were trained “on 30 million moves from games played by human experts” to improve the machine’s accuracy.
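Training on expert moves is, at heart, a supervised classification problem: given a position, predict the move the expert chose. The sketch below shows the idea with a linear softmax model fit by gradient descent on synthetic data; the sizes and the fabricated "expert" labels are illustrative stand-ins for AlphaGo's 30 million real positions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, FEATS, MOVES = 2000, 16, 9  # toy sizes, not Go's 361 points

# Synthetic dataset: board features X, and "expert" moves generated
# from a hidden linear rule the model must recover.
true_w = rng.normal(size=(FEATS, MOVES))
X = rng.normal(size=(N, FEATS))
expert = (X @ true_w).argmax(axis=1)

w = np.zeros((FEATS, MOVES))
for _ in range(300):
    logits = X @ w
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # Cross-entropy gradient: push probability mass toward the
    # move the expert actually played in each position.
    grad = X.T @ (p - np.eye(MOVES)[expert]) / N
    w -= 0.5 * grad

acc = ((X @ w).argmax(axis=1) == expert).mean()
```

The accuracy figure quoted earlier (57 percent) measures exactly this: how often the trained network's top-ranked move matches the human expert's.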

“But our goal is to beat the best human players, not just mimic them,” he said.

“To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning.

“Of course, all of this requires a huge amount of computing power, so we made extensive use of Google Cloud Platform.”
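The trial-and-error self-play Hassabis describes can be reduced to a very small loop: play many games, then strengthen whatever led to wins and weaken whatever led to losses. The game, win probabilities, and update factors below are all invented for illustration; real reinforcement learning in AlphaGo adjusted millions of network connections, not three weights.

```python
import random

MOVES = [0, 1, 2]                     # move 2 is secretly strongest
STRENGTH = {0: 0.2, 1: 0.5, 2: 0.8}   # chance each move wins a game

weights = {m: 1.0 for m in MOVES}     # the policy: a preference per move

def pick(w):
    """Sample a move in proportion to its current preference weight."""
    total = sum(w.values())
    r = random.uniform(0, total)
    for m, v in w.items():
        r -= v
        if r <= 0:
            return m
    return MOVES[-1]

random.seed(0)
for _ in range(5000):                 # thousands of self-play games
    move = pick(weights)
    won = random.random() < STRENGTH[move]
    # Trial and error: reinforce moves that led to wins,
    # dampen moves that led to losses.
    weights[move] *= 1.05 if won else 0.97

best = max(weights, key=weights.get)
```

No one tells the policy which move is best; the preference for the strongest move emerges purely from the win/loss feedback, which is the essence of reinforcement learning through self-play.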

The next challenge for AlphaGo comes in March when it takes on Lee Sedol in Seoul.

For Hassabis, AlphaGo’s achievements so far are already a victory, but still only a stepping stone to the bigger prize.

“We’re very excited but it’s just one rung on the ladder to solve artificial intelligence.”