"The book is fascinating. I have always wondered how to apply machine learning to games, especially Go."

Sean Lindsay

During the personal computer era, AIs have overtaken humans at checkers, backgammon, chess, and almost all other classic board games. But the ancient strategy game Go remained stubbornly out of reach for computers for decades. Then in 2016, Google DeepMind’s AlphaGo AI challenged 18-time world champion Lee Sedol and won four out of five games. The next revision of AlphaGo was completely out of reach for human players: it won 60 straight games, taking down just about every notable Go player in the process. AlphaGo was an incredible accomplishment for deep learning systems, and its story is a fascinating one.

Deep Learning and the Game of Go opens up the world of deep learning and AI by teaching you to build your own Go-playing machine. You'll explore key deep learning ideas like neural networks and reinforcement learning, and maybe even step up your Go game a notch or two. AI experts and Go enthusiasts Max Pumperla and Kevin Ferguson guide you every step of the way as you build your Go bot and train it from eternal loser to hardened Go player.

5.4.5. Stochastic gradient descent for loss functions

5.4.6. Propagating gradients back through our network

5.5. Training a neural network step-by-step in Python

5.5.1. Neural network layers in Python

5.5.2. Activation layers in neural networks

5.5.3. Dense layers in Python as building blocks for feed-forward networks

5.5.4. Sequential neural networks with Python

5.5.5. Applying our network to handwritten digit classification

5.6. Summary

6. Designing a neural network for Go data

6.1. Encoding a Go game position for neural networks

6.2. Generating tree search games as network training data

6.3. The Keras deep learning library

6.3.1. Keras design principles

6.3.2. Installing the Keras deep learning library

6.3.3. Running a familiar first example with Keras

6.3.4. Go move prediction with feed-forward neural networks in Keras

6.4. Analyzing space with convolutional networks

6.4.1. What convolutions do intuitively

6.4.2. Building convolutional neural networks with Keras

6.4.3. Reducing space with pooling layers

6.5. Predicting Go move probabilities

6.5.1. Using the softmax activation function in the last layer

6.5.2. Cross-entropy loss for classification problems

6.6. Building deeper networks with dropout and rectified linear units

6.6.1. Dropping neurons for regularization

6.6.2. The rectified linear unit activation function

6.7. Putting it all together for a stronger Go move prediction network

6.8. Summary

7. Learning from data: a deep learning bot

7.1. Importing Go game records

7.1.1. The SGF file format

7.1.2. Downloading and replaying Go game records from KGS

7.2. Preparing Go data for deep learning

7.2.1. Replaying a Go game from an SGF record

7.2.2. Building a Go data processor

7.2.3. Building a Go data generator to load data efficiently

7.2.4. Parallel Go data processing and generators

7.3. Training a deep learning model on human gameplay data

7.4. Building more realistic Go data encoders

7.5. Training efficiently with adaptive gradients

7.5.1. Decay and momentum in SGD

7.5.2. Optimizing neural networks with Adagrad

7.5.3. Refining adaptive gradients with Adadelta

7.6. Running your own experiments and evaluating performance

7.6.1. A guideline for testing architectures and hyperparameters

7.6.2. Evaluating performance metrics for training and test data

7.7. Summary

8. Deploying bots in the wild

8.1. Creating a move prediction agent from a deep neural network

8.2. Serving your Go bot to a web front-end

8.2.1. An end-to-end Go bot example

8.3. Training and deploying a Go bot in the cloud

8.4. Talking to other bots: the Go Text Protocol (GTP)

8.5. Competing against other bots locally

8.5.1. When a bot should pass or resign

8.5.2. Letting your bot play against other Go programs

8.6. Deploying a Go bot at an online Go server

8.6.1. Registering a bot at the Online Go Server (OGS)

8.7. Summary

9. Enter deep reinforcement learning

9.1. The reinforcement learning cycle

9.2. What goes into experience?

9.3. Building an agent that can learn

9.3.1. Sampling from a probability distribution

9.3.2. Clipping a probability distribution

9.3.3. Initializing an agent

9.3.4. Loading and saving your agent from disk

9.3.5. Implementing move selection

9.4. Self-play: how a computer program practices

9.4.1. Representing experience data

9.4.2. Simulating games

9.5. Summary

10. Reinforcement learning with policy gradients

10.1. How random games can identify good decisions

10.2. Modifying neural network policies with gradient descent

10.3. Tips for training with self-play

10.3.1. Evaluating your progress

10.3.2. Measuring small differences in strength

10.3.3. Tuning a stochastic gradient descent optimizer

10.4. Summary

11. Reinforcement learning with value methods

11.1. Playing games with Q-learning

11.2. Q-learning with Keras

11.2.1. Building two-input networks in Keras

11.2.2. Implementing the ε-greedy policy with Keras

11.2.3. Training an action-value function

11.3. Summary

12. Reinforcement learning with actor-critic methods

12.1. Advantage tells you which decisions are important

12.1.1. What is advantage?

12.1.2. Calculating advantage during self-play

12.2. Designing a neural network for actor-critic learning

12.3. Playing games with an actor-critic agent

12.4. Training an actor-critic agent from experience data

12.5. Summary

Part 3: Bringing it all together

13. AlphaGo: Bringing it all together

13.1. Training deep neural networks for AlphaGo

13.1.1. Network architectures in AlphaGo

13.1.2. The AlphaGo board encoder

13.1.3. Training AlphaGo-style policy networks

13.2. Bootstrapping self-play from policy networks

13.3. Deriving a value network from self-play data

13.4. Better search with policy and value networks

13.4.1. Using neural networks to improve Monte Carlo rollouts

13.4.2. Tree search with a combined value function

13.4.3. Implementing AlphaGo’s search algorithm

13.5. Practical considerations for training your own AlphaGo

13.6. Summary

14. AlphaGo Zero: Integrating tree search with reinforcement learning

14.1. Building a neural network for tree search

14.2. Tree search

14.2.1. Walking down the tree

14.2.2. Expanding the tree

14.2.3. Selecting a move

14.3. Training

14.4. Improving exploration with Dirichlet noise

14.5. Modern techniques for deeper neural networks

14.5.1. Batch normalization

14.5.2. Residual networks

14.6. Additional resources

14.7. Wrapping up

14.8. Summary

Appendixes

Appendix A: Mathematical foundations

A.1. Vectors, matrices, and beyond: a linear algebra primer

A.1.1. Vectors: one-dimensional data

A.1.2. Matrices: two-dimensional data

A.1.3. Rank 3 tensors

A.1.4. Rank 4 tensors

A.2. Calculus in five minutes: derivatives and finding maxima

Appendix B: The backpropagation algorithm

B.1. A bit of notation

B.2. The backpropagation algorithm for feed-forward networks

B.3. Backpropagation for sequential neural networks

B.4. Backpropagation for neural networks in general

B.5. Computational challenges with backpropagation

Appendix C: Go programs and servers

C.1. Go programs

C.1.1. GNU Go

C.1.2. Pachi

C.2. Go servers

C.2.1. OGS

C.2.2. IGS

C.2.3. Tygem

Appendix D: Training and deploying bots using Amazon Web Services

D.1. Model training on AWS

D.2. Hosting a bot on AWS over HTTP

Appendix E: Submitting a bot to the Online Go Server (OGS)

E.1. Registering and activating your bot at OGS

E.2. Testing your OGS bot locally

E.3. Deploying your OGS bot on AWS

About the technology

Go is an ancient strategy game. It's much simpler to learn than chess, yet far harder to master, because players have many more potential moves on each turn. (Chess has 20 possible opening moves. Go has 361, one for each point on its 19 × 19 board!) It's nearly impossible to build a competent Go-playing machine using conventional programming techniques, let alone have it win. By applying advanced AI techniques, in particular deep learning and reinforcement learning, you can train your Go bot in the rules and tactics of the game. Because deep learning systems improve the more they train, you'll see your bot grow from perpetual loser to unbeatable strategist!

About the book

Deep Learning and the Game of Go teaches you how to apply the power of deep learning to complex, human-like reasoning tasks by building a Go-playing AI. After introducing the foundations of machine learning and deep learning, you'll use Python to build a bot and then teach it the rules of the game. Everything you need to know about Go is covered, from how the game works to checking for illegal moves, learning from losses, and implementing winning strategies.

With the rules down, you'll turn your bot into a master with the help of Keras and deep reinforcement learning. You'll see your bot become a better player in real time as you apply new learning techniques and more complex strategies. You'll be amazed as your fledgling AI arms itself with the skills it needs to win. Before long, you'll have a Go-playing AI sure to beat you every time!

What's inside

Getting started with neural networks

Building your Go AI

Improving how your Go bot plays and reacts

Reinforcement learning with actor-critic and value methods

About the reader

No deep learning experience required. All you need is high school-level math and basic Python skills. This book even teaches you how to play Go!

About the authors

Max Pumperla is a data scientist and engineer specializing in deep learning at the artificial intelligence company skymind.ai. He is the co-founder of the deep learning platform aetros.com. Kevin Ferguson has 18 years of experience in distributed systems and data science. He is a data scientist at Honor and has worked at companies such as Google and Meebo. Together, Max and Kevin are co-authors of betago, one of the few open-source Go bots developed in Python.