Solving Go

A solved game is a game whose outcome (win, draw, or loss) can be correctly predicted from the start position when each side plays optimally.

Given the rules of any two-person game with a finite number of positions, one can always construct a minimax algorithm that exhaustively traverses the game tree. However, since for many games such an algorithm would require an infeasible amount of time to generate a move in a given position, a game is not considered to be solved weakly or strongly unless the algorithm can be run by existing hardware in a reasonable time. Many algorithms rely on a huge pre-generated database and are effectively nothing more than that.
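Such an exhaustive minimax can be sketched for a game small enough to solve strongly. The example below (my own illustration, not from the text) solves tic-tac-toe; the identical idea is infeasible for go.

```python
# Minimal sketch of exhaustive minimax on tic-tac-toe.
# All names here are my own choices for illustration.
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value for the player to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if '.' not in board:
        return 0  # board full, no winner: draw
    other = 'O' if player == 'X' else 'X'
    # Negamax: my best value is the best of minus the opponent's values.
    return max(-value(board[:i] + player + board[i+1:], other)
               for i, c in enumerate(board) if c == '.')

print(value('.........', 'X'))  # -> 0: tic-tac-toe is a draw with perfect play
```

The memoization (`lru_cache`) is what makes even this toy tractable: there are far fewer distinct positions than paths through the game tree.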

Any two-player game without chance can be "solved" on several levels:

Ultra-weak solved

In the weakest sense, solving a game means proving whether the first player will win, lose, or draw from the initial position, given perfect play on both sides. This can be done in a purely theoretical way that may not actually help determine perfect play.

Weakly solved

More typically, solving a game means providing an algorithm that secures a win for one player, or a draw for either, against any possible moves by the opponent, from the beginning of the game.
The most complex game solved to date (as of January 2011) is probably checkers.

Strongly solved

The strongest sense of solution requires an algorithm which can produce perfect play from any position, i.e. even if mistakes have already been made on one or both sides.

Whether a game is solved is not necessarily the same as whether it remains interesting for humans to play. Even a strongly solved game can still be interesting if the solution is too complex to be memorized; conversely, a weakly solved game may lose its attraction if the winning strategy is simple enough to remember (for example, tic-tac-toe).

An ultra-weak solution (for example Hex on a sufficiently large board) generally does not affect playability.

Ultra-weak solving go

Go without komi can be ultra-weakly solved and is a win or a draw for Black: if it were a win for White (the second player), Black could pass on his first move and thereby let White have the first move, stealing White's winning strategy. (This is an illustration of the strategy-stealing argument.)

But normal go has komi, so this argument proves nothing about the real game. It also does not help you win the game.

Is go a draw?

The komi compensates White for not having the first move. If the compensation were exactly right and both players played optimally, then both players would end with the same number of points and the game would end in a draw. This implies that the correct komi is an integer (a whole number without a fractional part).

Traditionally, draws (jigo) were counted as a win for one of the players, and as a result, after komi was introduced it was given a fractional part to prevent draws.

Weakly solving go

To weakly solve go is to provide an algorithm that secures a win for one player, or a draw for either, against any possible moves by the opponent, from the beginning of the game.
It is not necessary to play the best move in every situation: many situations are unreachable when one player plays optimally, and even in reachable situations any winning move suffices. (If a position offers four moves, one leading to a half-point win and another to a 100-point win, choosing the half-point win is good enough.)

For 7x7, a group of strong players have, with help from computers, tried to find perfect play. They claim to have examined all reasonable variations, but of course not all variations. Thus 7x7 is not truly solved in the sense meant on this page (not even ultra-weakly solved).

Game tree complexity

The branching factor is the number of different moves available to the player at any point during the game; game-tree complexity grows with it and with the length of the game.
It is defined as:
The game-tree complexity of a game is the number of leaf nodes in the solution search tree of the initial position of the game.

In layman's terms: "In every position there are many moves to consider, times the many moves still to be made."
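A rough back-of-the-envelope version of this definition is branching_factor raised to the power of game length. The figures below are commonly cited ballpark values for chess and go, not exact measurements:

```python
import math

def tree_complexity_log10(branching_factor, game_length):
    """log10 of b^d, computed as d * log10(b) to avoid huge numbers."""
    return game_length * math.log10(branching_factor)

# Chess: roughly 35 moves per position, roughly 80 plies per game.
print(round(tree_complexity_log10(35, 80)))    # -> 124, i.e. on the order of 10^123
# Go: roughly 250 moves per position, roughly 150 moves per game.
print(round(tree_complexity_log10(250, 150)))  # -> 360, i.e. on the order of 10^360
```

Working in logarithms is the usual trick here, since the raw counts overflow any fixed-size number type.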

Game state complexity

Decision complexity

Decision complexity refers to how hard it is to decide whether a given position is a win, a draw, or a loss for the player. Is a group alive or dead? There is no easy formula to decide this; playing it out is probably the only way (and then you are back to the game-tree complexity).

Compare this with chess, where a player who can capture the opponent's queen without losing his own will almost always win the game.

Go is a divergent game

Games can be roughly divided into two categories: divergent and convergent.

Games where the number of possible positions declines (after the opening phase) are called convergent. This is the case in most games where the number of pieces declines during the game, usually through capture of the opponent's pieces. These games can be solved by building a database of all end positions and working backwards from it to derive information about more complex positions (retrograde analysis).

For example, in chess there are endgame databases for nearly every combination of seven pieces or fewer, and they are useful: they make the game shorter and easier in practical terms, since the status of a position (win, draw, or loss) can be looked up in the database.
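Retrograde analysis can be shown on a toy game (my own choice of example, not from the text): a pile of n stones, each player removes one or two, and whoever takes the last stone wins. We tabulate positions from the terminal position backwards, exactly as endgame databases do.

```python
# Retrograde analysis on a simple subtraction game.
def solve_subtraction_game(max_n):
    # win[n] is True if the player to move wins with n stones left.
    win = [False] * (max_n + 1)   # n = 0: no move available, so a loss
    for n in range(1, max_n + 1):
        # A position is winning if some move reaches a losing position.
        win[n] = any(not win[n - take] for take in (1, 2) if take <= n)
    return win

table = solve_subtraction_game(10)
print([n for n in range(11) if not table[n]])  # -> [0, 3, 6, 9]: the losing positions
```

This only works because the game converges on a small set of terminal positions; in a divergent game there is no such small set to start the backward sweep from.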

In divergent games the number of possible positions increases throughout the game, and trying to examine all possible end positions becomes almost useless.

In go, because the number of stones increases during the game, so does the number of different positions, and compiling a list of all possible end positions becomes an almost useless occupation: trying to use it for backtracking (creating knowledge about earlier positions in the game) does not work.

Boards for which Go is solved

Spectrum article

The IEEE (Institute of Electrical and Electronics Engineers) publishes a journal called Spectrum. An October 2007 article titled Cracking Go was written by hardware and chip designer Feng-Hsiung Hsu, PhD, who designed Deep Blue, the chess computer that beat World Chess Champion Garry Kasparov. Hsu stated in the article: "Nevertheless, I believe that a world-champion-level Go machine can be built within 10 years, based upon the same method of intensive analysis--brute force, basically--that Deep Blue employed for chess." In the article, Dr. Hsu does not discuss these characteristics, among others, that distinguish Go from chess:

the lack of a whole board opening book. Opening books played a crucial role in Deep Blue.

the consequences of a local fight upon other fights and the global position. Chess is essentially a single local fight.

Note that the number of legal positions is not the same as the number of legal games.

R = ?

This is the most difficult to estimate. I welcome your thoughts.

For example, we could specify "only games with first moves in corners" or "only games with first moves in corners or approaching other corners". These two specifications eliminate a huge number of games (and yet leave a ridiculous number).

We can't really say "only games matching a fuseki played by a pro" because pros could have failed to play the best fuseki yet.

Given these assumptions, the number of reasonable games is on the order of 10^200.

Users

U = 10,000

As few as one thousand, but with the East, maybe many more. Maybe as many as fifty thousand, but probably more like ten thousand.

T = 3 hours / day

Some users will give 24 hours of CPU time. Some will only leave it on at night. Some will only run it as a screen-saver and will turn their computers off at night. Unfortunately, the last category will be the largest.
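A back-of-the-envelope check shows why even these numbers cannot help. The evaluation rate below is my own assumption (a generous million positions per second per machine), not a figure from the text:

```python
# Estimate of total positions a volunteer network could evaluate per year.
users = 10_000                  # U from above
hours_per_day = 3               # T from above
positions_per_second = 10**6    # assumed throughput per machine

per_year = users * hours_per_day * 3600 * 365 * positions_per_second
print(f"{per_year:.1e} positions per year")  # -> ~3.9e16

# Roughly 10^17 positions a year is nowhere near the ~10^200 reasonable
# games estimated above, so volunteer computing cannot brute-force go.
```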

Discussion

Inadequacy of brute force

Just for the curious:

Suppose every particle in the universe (subatomic: electrons, quarks, etc.; overestimate and say 10^90 of them) was evaluating one board position every Planck time (the smallest meaningful unit of time; one second contains approximately 2x10^43 Planck times) for the entire life of the universe (~14 billion years, or 4x10^17 seconds).

How many board positions have been evaluated?

2x10^43 * 4x10^17 * 10^90 ~= 10^151 board positions.

Let's also calculate the maximum number of board positions as 3^361 [3^(19*19)]. That's 1.7x10^172.

So that means it would take 10^21 more such 14-billion-year periods to evaluate all the board positions.

So solving go using brute force is simply impossible. An entire universe dedicated solely to this task could not do it.
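The arithmetic above can be checked directly. The Planck time is about 5.4x10^-44 seconds, so there are roughly 2x10^43 of them per second; working in base-10 logarithms keeps the numbers manageable:

```python
import math

# Checking the brute-force arithmetic with the constants involved.
planck_time = 5.39e-44               # seconds
per_second = 1 / planck_time         # ~1.9e43 Planck times per second
log_particles = 90                   # log10 of the 10^90 particle overestimate
age_seconds = 4e17                   # ~14 billion years

log_evaluated = math.log10(per_second) + math.log10(age_seconds) + log_particles
log_positions = 361 * math.log10(3)  # each of 19*19 points: empty/black/white

print(round(log_evaluated))                  # -> 151
print(round(log_positions))                  # -> 172
print(round(log_positions - log_evaluated))  # -> 21 more universe-lifetimes
```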

symplicity: As a hobbyist programmer who has done some work in this sort of stuff, it pains me when people take the raw tree complexity or state-space size of a game and claim that it must take approximately that many steps of computation to solve the game. If that were the case, then Checkers or even Connect 4 would probably not have been solved yet.

From the point of view of game-tree complexity, only a square-root proportion of the game tree need be examined to prove the value of a game (alpha-beta pruning). From the point of view of the state space, it is likely that drastically fewer positions, by many orders of magnitude, would need to be examined than the total number of possible positions. It is absolutely unnecessary to examine anything close to every possible position. Also, all the estimates of maximum theoretical game length (~10^48?) are irrelevant, because it is exceedingly unlikely that any game much longer than a few hundred moves is actually relevant to proving the game result.

On the flip side, positions may require multiple visits due to cases where the game history is relevant, that is, in ko. But we could special-case handle simple ko, and for longer cycles (superko), it's not unreasonable to think that for the vast majority of the branches in the game tree relevant to solving the game, the result will be independent of any superko rule that might be used.

Taking into account more Go-specific properties, a solver could drastically prune the search space in many other situations. For instance, in the oodles of branches where one side controls more than half the board, one can restrict the search to proving only that the territory is safe (and after the square-root reduction, the vast majority of the branching occurs as one player optimally refutes completely stupid play of the opponent, so this pruning could happen for a great proportion of the game tree). Also, certain classes of moves might be provably eliminated, such as filling in single-point eyes of groups with 2 or more eyes, etc, which could give a few orders of magnitude when applied everywhere in the tree. And so on, with other optimizations.

Of course, all this still puts 19x19 Go unbelievably far out of reach of solution. But even so, it's not nearly as simple as just counting the size of the game tree. In reality, the square root generally suffices, and we can do quite a bit better than even that.
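The square-root effect of alpha-beta pruning mentioned above can be demonstrated with a toy counter (my own illustration: a uniform tree with random leaf values, counting nodes visited with and without pruning):

```python
import random

BRANCH = 8  # uniform branching factor for the toy tree

def minimax(depth, counter):
    counter[0] += 1
    if depth == 0:
        return random.random()
    return max(-minimax(depth - 1, counter) for _ in range(BRANCH))

def alphabeta(depth, alpha, beta, counter):
    counter[0] += 1
    if depth == 0:
        return random.random()
    for _ in range(BRANCH):
        alpha = max(alpha, -alphabeta(depth - 1, -beta, -alpha, counter))
        if alpha >= beta:
            break  # cutoff: remaining siblings cannot affect the result
    return alpha

random.seed(0)
full, pruned = [0], [0]
minimax(6, full)
alphabeta(6, float('-inf'), float('inf'), pruned)
print(full[0], pruned[0])  # alpha-beta visits far fewer nodes
```

With good move ordering the pruned count approaches the square root of the full count; even with the random ordering here the reduction is dramatic.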