Board & Card Games Stack Exchange is a question and answer site for people who like playing board games, designing board games or modifying the rules of existing board games.

Fairly recently, computer Go programs became able to compete with humans using Monte Carlo tree search:

A Monte Carlo (MC) Go program plays random games and easily evaluates the terminal position (win or loss). An MC program searches for moves that have high win rates, calculated from playing out at least a few hundred random games.

(This is a very simplified description. In practice one wants to explore interesting branches with higher priority, so the randomness has to be controlled in some way using the collected results. To play Go at a reasonable level, several million random games still have to be played out. Fortunately, playing out random games is quite fast…)
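The playout-and-win-rate idea described above can be sketched in a few lines. This toy example (mine, not from the original post) uses tic-tac-toe as a stand-in for Go, since a full Go playout engine would be much longer; all function names are illustrative:

```python
import random

# Flat Monte Carlo move evaluation on tic-tac-toe.
# Board: list of 9 cells, each 'X', 'O', or None.

LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
         (0,3,6), (1,4,7), (2,5,8),   # columns
         (0,4,8), (2,4,6)]            # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, to_move):
    # Play uniformly random moves to the end; return 'X', 'O', or None (draw).
    board = board[:]
    while True:
        w = winner(board)
        if w is not None:
            return w
        moves = [i for i, cell in enumerate(board) if cell is None]
        if not moves:
            return None
        board[random.choice(moves)] = to_move
        to_move = 'O' if to_move == 'X' else 'X'

def best_move(board, player, n_playouts=200):
    # Estimate each legal move's win rate from random playouts;
    # return the move with the highest estimate.
    moves = [i for i, cell in enumerate(board) if cell is None]
    opponent = 'O' if player == 'X' else 'X'
    def win_rate(m):
        trial = board[:]
        trial[m] = player
        wins = sum(random_playout(trial, opponent) == player
                   for _ in range(n_playouts))
        return wins / n_playouts
    return max(moves, key=win_rate)
```

On a position where X already has two in a row, the winning square scores a 100% win rate in the playouts, so even a modest sample size finds it reliably.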

I feel that many more games could be "addressed" using this method, perhaps even a vast majority of board games…

Two questions:

Has the Monte Carlo method already been applied to other games? (Are there concrete implementations available?)

For which kinds of games, among those that have clearly stated rules and don't rely on complex communication between players, can this method be expected to fail?

For which kinds of games, among those that have clearly stated rules and don't rely on complex communication between players, can this method be expected to fail?

Actually, Monte Carlo methods don't do very well with endgames in general. You will find that many computer implementations that use Monte Carlo methods switch to different algorithms when playing the endgame.

Maven (a Scrabble program) uses B* search during the endgame (once there are no more tiles to draw).

MoGo uses 3x3 patterns to help identify "interesting" moves, and Upper Confidence bounds applied to Trees (UCT) to help prune less optimal random moves from consideration and speed up searching.
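The UCT selection rule mentioned here is based on the UCB1 formula, which balances a move's observed win rate against an exploration bonus that shrinks as the move is sampled more often. A minimal sketch (the function names are illustrative; the formula is the standard one):

```python
import math

def ucb1(wins, visits, parent_visits, c=1.414):
    # Exploitation term (observed win rate) plus an exploration bonus.
    # c ~ sqrt(2) is the textbook constant; tuned in practice.
    if visits == 0:
        return float('inf')  # always try unvisited moves first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits):
    # children: list of (wins, visits) tuples for a node's moves.
    # Descend into the child with the highest UCB1 score.
    return max(range(len(children)),
               key=lambda i: ucb1(*children[i], parent_visits))
```

An unvisited child always wins the selection; among visited children, a move with a mediocre win rate but few visits can still outrank a well-explored strong move, which is what keeps the search from tunneling too early.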

While Monte Carlo methods are a useful tool for playing board games, they aren't a perfect solution. Computer Go programs using MCTS are still unable to beat the best players on 19x19 boards without significant handicaps. Computer bridge programs are no match for expert players. Monte Carlo methods only approach perfect play given infinite time and space.

+1 with extra props for covering a range of games. I'm impressed Monte Carlo search worked well for TD-Gammon.
– Alex P, Apr 13 '12 at 4:51


In all honesty, I don't know if the Monte Carlo method is well defined enough that one could distinguish an MCTS algorithm from a rollout algorithm, from a neural network, etc. I think the list of games could be much longer, but the OP only wanted to know whether they exist.
– user1873, Apr 13 '12 at 5:21


Where did you get the idea that MoGo uses databases of endgames? There is no such thing in Go. Do you mean databases of opening moves (which indeed were added to some other MC AIs)?
– Stéphane Gimenez, Apr 13 '12 at 17:41

google.com/… Oops, you are right. MoGo uses 3x3 patterns to find more interesting moves in general, instead of the purely random MC approach; this isn't specific to endgames. My overall point still stands: MC methods are not good at endgames.
– user1873, Apr 14 '12 at 2:03

First, one needs to understand the differences between Chess and Go from a game complexity standpoint. Next, one must understand the differences between the two types of AI algorithms, and why one works for Chess and the other doesn't.

Both chess and Go are perfect information games with no stochastic elements. This means you can always see the full state of the game, and there is no chance or luck involved (no dice to roll, no unlucky card draws).

The difference between the games can thus be distilled to the number of possible moves each turn (# of decisions or choices) and the length of the game. I'm skipping a more formal analysis of search tree sizes, state space sizes, etc., in order to provide a more intuitive understanding of the differences.

For chess, the average number of possible moves at a typical point in the game is about 30 different moves. A typical game lasts about 40 moves. For Go, there are about 250 choices per move and the game lasts about 150 moves.
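Plugging those figures into the usual b^d estimate of game-tree size makes the gap concrete (a rough back-of-the-envelope calculation of mine, not from the answer):

```python
import math

# Game-tree size ~ b**d, with b = branching factor, d = game length.
# Using the figures quoted above: chess b=30, d=40; Go b=250, d=150.
chess_log10 = 40 * math.log10(30)    # log10 of 30**40
go_log10 = 150 * math.log10(250)     # log10 of 250**150

print(f"chess ~ 10^{chess_log10:.0f} positions")  # chess ~ 10^59 positions
print(f"go    ~ 10^{go_log10:.0f} positions")     # go    ~ 10^360 positions
```

The Go tree isn't just larger; it is larger by roughly 300 orders of magnitude, which is why exhaustive search with pruning never had a chance.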

What this means is that Go requires evaluating many more possible moves, much farther into the future, than chess does.

However, what makes it possible for humans to play is that in Go the pieces are all alike, so we can reason about patterns instead of individual positions.

A traditional chess AI uses something called an evaluation function, which uses shortcuts (like assigning more points to queens than to pawns) to evaluate a board and say whether a game state is better or worse for one player than the other. With Go, it's very difficult to come up with an evaluation function, since the pieces are all the same and territory is not set in stone until nearly the end of the game. This is why the traditional chess-AI approach to Go has failed: the larger decision and state spaces are impossible to prune, so the search is ineffective.
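A material-count evaluation function of the kind described can be sketched in a few lines. The board representation below is an illustrative assumption, not how a real engine stores positions:

```python
# Toy material-count evaluation: positive scores favor White.
# Conventional piece values; the king gets 0 since it can't be traded.
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

def evaluate(board):
    # board: iterable of piece codes; uppercase = White, lowercase = Black.
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score
```

Real evaluation functions add positional terms (pawn structure, king safety, mobility), but even this crude material count orders most chess positions sensibly, and that is precisely what has no analogue in Go, where every stone is worth the same.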

Now, let's flip the problem and see why using Monte Carlo on chess might be a bad idea. Since random play doesn't distinguish between good moves and bad moves, a large majority of the random playouts will explore obviously bad game trees: sacrificing your queen for no advantage, for example, or leaving it vulnerable and having the opponent fail to exploit it. Little information is gained by averaging in the results of individually bad moves. This type of AI will work, but it would require much more computation than the usual evaluative approach.

In short, much more of the winning/losing is evident by looking at the pieces in chess than it is in Go.

So, why does Monte Carlo work for Go (i.e. why can it beat humans)? It works because both humans and computers are very bad at playing Go (compared to theoretical perfect play). Monte Carlo sampling of random play works because it needs no higher-level understanding of patterns beyond what can be obtained in a computationally efficient manner.

So, how do other board games compare? Let's ask ourselves these questions:

Can the board situation be easily evaluated?

How many decisions/moves are possible each turn?

How many decisions need to be made in a typical game?

How predictable are the outcomes of each move?

Most board games will show characteristics that make them far closer to chess than to Go on the first three questions. This strongly suggests that a chess-style AI will be easier, faster, and more effective.

Since question #4 is where many other games differ from both chess and Go, let's see how randomness affects the two algorithms.

If we had a random factor, say drawing a card from a stack of 60, each decision to draw a card branches the game into 60 possible outcomes. Depending on what other players draw, their choices will change, which again puts us in the position of sampling many states that are highly unlikely or impossible (arguing against Monte Carlo). Meanwhile, an evaluation function can easily be written to give a value to each card (combined with the state of the board).
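One way an evaluation function can absorb that randomness is to treat the draw as an expectation over the 60 equally likely cards (an expectimax-style chance node) rather than sampling random playouts. A minimal sketch of mine, with illustrative names:

```python
# Expectimax-style handling of a chance node: instead of sampling
# random draws, average the evaluation over every remaining card.
def expected_draw_value(deck, evaluate_card):
    # deck: list of remaining cards, each equally likely to be drawn.
    # evaluate_card: card -> score under the current board state.
    return sum(evaluate_card(card) for card in deck) / len(deck)

# Example: a 60-card deck where 20 cards are worth 3 points and 40
# are worth nothing; the expected value of a draw is exactly 1.0.
deck = [3] * 20 + [0] * 40
print(expected_draw_value(deck, lambda card: card))  # 1.0
```

This computes the exact expected value in one pass over the deck, whereas a Monte Carlo approach would need many samples just to approximate the same number.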

I have not seen Monte Carlo applied to other board-game AIs, and the above are some of the reasons why. They are also the reasons it can be expected to fail. When it fails, it can fail spectacularly, making boneheaded, obviously bad moves, because the 'random' outcomes sampled just happened to be irrational far down the chain of future random choices.

I think the original question presupposes that you're using some kind of static evaluation function to prune the (incomplete) search tree used by the Monte Carlo method, just like you do with a more conventional search (per the clause about "interesting" branches). So the main issue for a game like chess becomes "How much does the AI's play deteriorate when you take a Monte Carlo sample instead of exploring all branches in a deterministic fashion?"
– Alex P, Apr 12 '12 at 21:23

Sorry, I disagree with most of the things you claim, and your second link (the only one which seems relevant to this question) doesn't consider Monte Carlo tree search (see the last paragraph).
– Stéphane Gimenez, Apr 12 '12 at 21:42


@StéphaneGimenez Sorry you disagree. My claim is that MC search is only necessary in Go because of the difficulty of creating a useful evaluation function. Most other board games do not have the complexity of Go, and it is easy to write evaluation functions for them. Therefore, it's not necessary to create an MC-based AI, and doing so would make it needlessly slow and expensive to run (though it might be easier to 'write').
– Neal Tibrewala, Apr 13 '12 at 2:13

@NealTibrewala That makes sense, maybe you could edit your answer so that the evaluation problems of Go come across as a main point. As is, you don't mention that until the 6th paragraph.
– Gregor, Apr 13 '12 at 16:10

In the last few years, several Monte-Carlo based techniques emerged in the field of computer games. They have already been applied successfully to many games, including POKER (Billings et al. 2002) and SCRABBLE (Sheppard 2002). Monte-Carlo Tree Search (MCTS), a Monte-Carlo based technique that was first established in 2006, is implemented in top-rated GO programs. These programs defeated for the first time professional GO players on the 9 × 9 board. However, the technique is not specific to GO or classical board games, but can be generalized easily to modern board games or video games. Furthermore, its implementation is quite straightforward. In the proposed demonstration, we will illustrate that MCTS can be applied effectively to

classic board-games (such as GO)

modern board-games (such as SETTLERS OF CATAN), and

video games (such as the SPRING RTS game).

So, it seems that it has already been applied to some other games. And they mention explicit implementations, but don't provide links to them.

I was present in the room for the first official 9x9 win against a pro by MoGo, and it was 3 or 4 years ago. Now they are just 4 or 5 stones away from the strongest professionals at 19x19. In turn, one professional once claimed to be something like 3 stones away from perfect play (and from a Go player's point of view that is plausible).
– Stéphane Gimenez, Apr 12 '12 at 22:32

Some of this comes down to the definition of "fail". The MC Catan AI will play 'well', and may beat other AIs, but its authors claim only that it is a "Challenging Opponent", implying it's still possible for humans to win. In general, any MC search requires that good moves and bad moves occur in a fairly reasonable ratio. Put another way, there's an upper bound on how good an MC-based AI can be, and a classical pruning AI can beat it (in non-degenerate cases).
– Neal Tibrewala, Apr 13 '12 at 2:19

2

@Neal: Of course it's still possible for humans to win Catan. Even if the computer could play absolutely perfectly, humans, even beginners, could beat it a decent percentage of the time. This is due to the large amount of chance inherent in Catan. The exact same goes for poker, bridge, cribbage, etc. The same is not true of perfect-information games like chess or Go; a computer that played perfectly would always be able to guarantee a win/draw playing as the advantaged side.
– BlueRaja - Danny Pflughoeft, Apr 13 '12 at 17:21