On human and computer intelligence in chess (with solution!)

4/24/2017 – In March there was an international furore over a chess position published by the famous mathematics professor Sir Roger Penrose. It purported to show a key difference between human and computer thinking, and to have general implications for our understanding of Artificial Intelligence. The example was unconvincing, but in reaction a number of chess players and AI researchers have sent us papers that we want to share with you. We start with a challenge to humans and machines issued by GM Miguel Illescas.


In March there was an international furore over a chess position published by mathematics professor Sir Roger Penrose, who gained world-wide renown in 1988, when he shared the Wolf Prize with Stephen Hawking for their work on black hole singularities. The chess problem was devised to defeat an artificially intelligent (AI) computer but be solvable for humans: “We plugged it into Fritz, the standard practice computer for chess players, which did three-quarters of a billion calculations, 20 moves ahead,” explained James Tagg, Co-Founder and Director of the Penrose Institute. “It says that one side or the other wins. But,” Tagg continued, “the answer that it gives is wrong.”

We reported on the Penrose problem and confirmed that chess engines give a very high evaluation in favour of Black, who has a huge material advantage. But Black's pieces are all constricted and cannot be brought to bear against the lone white king, which is supported by four pawns. Humans quickly recognize that it is trivially easy for White to hold the draw, yet computers display a 25-30 pawn advantage for Black – even while defending the position perfectly with White! Not the best example of the difference between human and machine thinking.

I described a similar situation that occurred in a well-publicized game between IM David Levy and the computer program CHESS 4.8 in 1979. Levy had a queen and the computer a c-pawn on the seventh rank. The program evaluated its position as completely lost, yet defended it perfectly to a draw.

In the first article I gave an example of a more relevant position, one in which there is a theoretical draw that humans can find, but where the computer will play the wrong move and actually lose:

Here the correct first move for White, who is at an overwhelming material disadvantage, is to sacrifice even more material. It is the only way to secure a draw: White must play 1.Ba4+!, and after 1...Kxa4 play b3+, c4+, d5+, e6+ and finally f5 to completely lock up the position. This is a much more relevant test, as chess engines, playing the white side, will actually select the wrong strategy and lose the game – while in the Penrose position computers think that White is losing, but hold the draw without any problem.

After the above articles appeared I received a number of very interesting emails, which I intend to share with our readers over the next few weeks. They have to do mainly with the fortress theme and the way it is handled by computers. Today I will start with a communication from Miguel Illescas, a top Spanish GM and trained computer scientist, who worked for the Deep Blue team that beat World Champion Garry Kasparov in 1997. Miguel wrote:

"Several years ago I shared with my colleague [from the Deep Blue team] Murray Campbell an idea to make the computer 'understand' a fortress, but nobody has done it so far (I think). It looks simple to me: if the evaluation of several best moves is exactly the same and it stays stable it means there is no win. If that is true it shows computer programmers are not interested to solve something which is so unlikely to happen – or they are simply lazy!"
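Editor's note: Illescas's stability test can be prototyped in a few lines. The sketch below uses plain Python and invented evaluation data – the function, parameters and thresholds are ours for illustration, not taken from any engine. It flags a possible fortress when the scores of the top moves are equal to one another and have stopped changing as the search deepens:

```python
def looks_like_fortress(evals_by_depth, k=3, window=4, tol=0.0):
    """Flag a possible fortress: the top-k move evaluations are (near-)equal
    to each other and have not changed over the last `window` search depths.

    evals_by_depth: list of lists; evals_by_depth[d] holds the evaluations
    (in pawns) of the k best moves at successive search depths.
    """
    if len(evals_by_depth) < window:
        return False          # not enough depths to judge stability
    recent = evals_by_depth[-window:]
    reference = recent[0][0]
    for depth_evals in recent:
        for e in depth_evals[:k]:
            if abs(e - reference) > tol:
                return False  # scores differ or still drift with depth
    return True

# Invented data: an engine reporting -9.0 for each of the 3 best moves,
# unchanged over six successive depths -- the signature Illescas describes.
stable = [[-9.0, -9.0, -9.0]] * 6
assert looks_like_fortress(stable) is True

# A position where the score keeps drifting as the search deepens: no fortress.
drifting = [[-3.0, -2.8, -2.5], [-3.5, -3.1, -2.9],
            [-4.2, -3.8, -3.3], [-5.0, -4.4, -3.9]]
assert looks_like_fortress(drifting) is False
```

Of course, as readers point out in the comments below, a stable score is necessary but not sufficient evidence of a fortress, so such a heuristic could only ever be a first filter.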

The Illescas Challenge

Together with this message I received a challenge which I pass on to our readers:

You are invited to analyse the position here on our news page, or show it to your favourite chess engine (FEN: 1r6/1n1R1b2/8/1p1p3k/pPpPp1p1/2P1P3/P2K1PP1/8 w - - 0 1) and try to solve it with machine assistance. The solution and analysis will be added to this article in a few days – after which I will share the thoughts and ideas of other AI scientists on chess positions that are difficult for computers to comprehend.
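For readers who prefer to set the position up by hand, the piece-placement field of the FEN above can be expanded into a text diagram with a few lines of plain Python (a convenience sketch, not part of the challenge):

```python
def fen_to_diagram(fen):
    """Expand the piece-placement field of a FEN string into an 8x8 text board.
    Uppercase = White, lowercase = Black, '.' = empty square."""
    placement = fen.split()[0]          # first FEN field: piece placement
    rows = []
    for rank in placement.split('/'):   # ranks listed from 8 down to 1
        row = ''
        for ch in rank:
            row += '.' * int(ch) if ch.isdigit() else ch
        rows.append(row)
    return '\n'.join(rows)

fen = "1r6/1n1R1b2/8/1p1p3k/pPpPp1p1/2P1P3/P2K1PP1/8 w - - 0 1"
print(fen_to_diagram(fen))
# .r......
# .n.R.b..
# ........
# .p.p...k
# pPpPp.p.
# ..P.P...
# P..K.PP.
# ........
```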

Frederic Friedel, Editor-in-Chief of the ChessBase News Page. He studied Philosophy and Linguistics at the University of Hamburg and Oxford, graduating with a thesis on speech act theory and moral language. He started a university career but switched to science journalism, producing documentaries for German TV. In 1986 he co-founded ChessBase.


@FiddleSticksNZ: It seems that the knights are completely immobile and useless. The general solution would probably be to get Black to capture the white queen with one of his pawns (or the queen captures the black pawn instead) and then to break open the position with a white pawn moving forward.

FiddleSticksNZ 4/26/2017 12:32

albitex: But is there a solution? Can White win? I could not find it.
At first I thought I had found the solution:
1. Kd1 Kc6 2. Ke1 Kb6 3. Rg2 Kc5 (3... hxg2? +#17) 4. Re2 Kd6 (4... dxe2? +#15) 5. Kf2 Kc7 6. Re1 Kc6 7. Rg1 Kb6 8. Rg2 Kc7 (8... hxg2? +#17) 9.Kg1 Kd7 10. Rd1 Ke7 11. Re2 Kd7 (11... dxe2? +#16) 12. Ree1 Kc6 13. Qa2 bxa2 14. b3 axb3 (14... cxb3 15. c4 a1=Q 16. Nc3+-) 15. a4 a1=Q 16. Na3 Qxa3 17. Ra1 Qb2 18. a5 +- 1-0.
But there is a refutation: (14... Kb5! 15. Rc1 a1=Q 16. b4 =). You had the right idea of sacrificing the queen on a2, but you needed to prepare it better: move the rooks to e1, with the last rook going to c2 while the king is on d1; then after Kc1 you are ready for the Qa2! sacrifice. It is positions like this that fool computers – even the loose definition of a fortress that GM Illescas proposed would fail here.

JoshuaVGreen 4/26/2017 01:40

I think the Paul Lamford position (shared by Zvi Mendlowitz) shows the real problem.
It's easy to "heuristically" argue that such positions are drawn, and it's fun to mock computer programs (and programmers) for not applying such rules, but how does one know that the heuristics are accurate? Knowing that Illescas's position is a problem that is claimed to be drawn, I can easily convince myself that 1. Rxb7 Rxb7 2. g3 holds the fort. I can even try some simple winning ideas for Black, determine how White foils them, and convince myself that "of course" Black doesn't have a way through. But what if I've missed a line? What if Black has some clever tempo maneuver to break through? My claim that "I know it's clearly drawn" rings a bit hollow since I can't rigorously prove it. Of course, I happen to be right about that one (I assume), but for the Lamford position similar arguments would (apparently) lead me astray.
To put it another way, a human is likely to evaluate both positions (Illescas's and Lamford's) as drawn while a computer is likely to evaluate both positions as winning. Each is right half the time, but it's not clear which error should be considered more severe. IMO, since actual fortresses are likely far rarer than "pseudo-fortresses," the computer is probably erring in the right direction.

albitex 4/26/2017 01:06

14...Kb5! instead of 14...axb3, and so my whole maneuver accomplished nothing.

Reminds me of the following endgame, where the powerful side which is blocked behind the wall can actually win - but how?
Paul Lamford, 1981
8/8/8/1k3p2/p1p1pPp1/PpPpP1Pp/1P1P3P/QNK2NRR w - - 0 1
White to play and win

WildKid 4/25/2017 11:21

benedictralph: You just need a set of rules or principles to follow in such positions in order to solve them (the same things a human player would look for)

You may think that's trivial – I don't. Any particular heuristic can obviously be coded, but finding a set of heuristics that would cover all, or the great majority of, cases is not trivial, in my opinion. And the tree needs to be 'pruned', because otherwise you are dealing with ((number of possible moves)**depth) alternatives, which will not work in 'fortress' positions. I think you're right that P = NP is probably lurking somewhere behind the question of whether this is generally solvable.
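Editor's note: the ((number of possible moves)**depth) blow-up mentioned here is easy to quantify. Taking the commonly quoted textbook estimate of about 35 legal moves per chess position:

```python
# Rough node counts for exhaustive search at branching factor b and depth d plies.
def nodes(b, d):
    return b ** d

b = 35  # commonly quoted average branching factor in chess
for d in (5, 10, 20):
    print(f"depth {d:2d}: ~{nodes(b, d):.2e} positions")

# Even at a billion positions per second, an exhaustive 20-ply search alone
# would take vastly longer than the age of the universe:
seconds = nodes(b, 20) / 1e9
print(f"depth 20 at 1e9 pos/s: ~{seconds / (3600 * 24 * 365):.1e} years")
```

This is why engines prune aggressively, and why a fortress whose "proof of the draw" lies beyond the pruning horizon remains invisible to them.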

benedictralph 4/25/2017 10:49

@WildKid: The "doing" something about fortress positions is the easy part. You just need a set of rules or principles to follow in such positions in order to solve them (the same things a human player would look for). These can be coded into heuristics easily, but they need to outweigh typical material considerations in precisely those positions. Basically a set of new heuristics and associated weights *after* such a position has been identified. The computational stress is in identifying them which means by default assuming every position is potentially a fortress and checking for it. It does add up in terms of cycles required.

Why do you think pruning the game tree is necessary regardless? It's a depth versus breadth search problem. There are no "clever" ways to get around this without creating even bigger loopholes. It's been studied to death in AI. Unless computers can become "creative" somehow, which means they can sometimes get from point A to E without having to systematically go through B, C and D. Now *that* would help with relational databases too but it's a much larger problem than the solution to fortress positions in chess. It's bordering on the P vs NP issue if not precisely that.

WildKid 4/25/2017 10:33

benedictralph: one last comment: you say it's a relatively simple matter to detect fortress positions using piece mobility measures, and in principle I agree. In fact, I don't think such a measure would add measurably to execution time – we're talking a few hundred cycles out of billions. The problem is not detecting a fortress setup, but DOING something about it – actually solving the problem that has been posed.

WildKid 4/25/2017 10:22

benedictralph: the similarity is that the potential logic chains are very long and have many branches – too many to explore one by one. And simplistic 'pruning' will not be effective, since the correct solution will be beyond the pruning depth. There are constraints in both cases that, if recognized, could prune the tree very drastically and make it computationally navigable, but a computer might have trouble finding the simplifying 'insight' ('this lock is the one that is causing the whole thing to seize up, so if I can find a way to release the locks prior to it, the whole chain will eventually release'), even though to a human being it would be quite obvious.

benedictralph 4/25/2017 09:52

By the way, you know what type of position would really demonstrate a "flaw" in chess engine design? A forced mate (e.g. 3-5 moves long) that a human could solve but a good engine couldn't. Or even one a human could solve more quickly. That's because computers are certainly coded to hell and back to find those.

benedictralph 4/25/2017 09:31

@WildKid: It's a relatively trivial matter in computing/AI to perform simple pattern recognition to detect 'fortress-like' positions in chess where average piece mobility is lower than would be expected given similar board density/clutter. The real reason we don't see this sort of thing coded into most engines is that it would probably consume an additional 1-5% of computing resources while addressing only 0.001% of positions. So it's a useful trade-off of sorts. As for application of this to relational databases, I highly doubt it as I just don't see the connection. Perhaps you'd like to illustrate clearly the link between a potentially novel method of solving fortress-like positions in chess and demonstrably improving relational database management.
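Editor's note: the mobility measure described here is simple to prototype without any chess library. The sketch below counts pseudo-legal moves only – it ignores checks, pins, castling, en passant and promotions, so it is a crude proxy rather than a move generator. Applied to the Illescas challenge position, it shows how little of Black's extra material can actually move:

```python
def parse_fen_board(fen):
    """Return a dict {(file, rank): piece_char} from the FEN placement field.
    file 0 = a-file, rank 0 = rank 1; uppercase = White, lowercase = Black."""
    board = {}
    for r, row in enumerate(fen.split()[0].split('/')):  # ranks 8 down to 1
        f = 0
        for ch in row:
            if ch.isdigit():
                f += int(ch)
            else:
                board[(f, 7 - r)] = ch
                f += 1
    return board

KNIGHT = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
KING = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
ROOK = [(1, 0), (-1, 0), (0, 1), (0, -1)]
BISHOP = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def mobility(board, white):
    """Count pseudo-legal moves for one side: a crude mobility proxy that
    ignores checks, pins, castling, en passant and promotions."""
    total = 0
    for (f, r), p in board.items():
        if p.isupper() != white:
            continue
        kind = p.upper()
        if kind == 'P':
            d = 1 if white else -1
            if 0 <= r + d < 8 and (f, r + d) not in board:   # single push
                total += 1
                start = 1 if white else 6
                if r == start and (f, r + 2 * d) not in board:  # double push
                    total += 1
            for df in (-1, 1):                                # captures
                tgt = board.get((f + df, r + d))
                if tgt and tgt.isupper() != white:
                    total += 1
        elif kind in ('N', 'K'):
            for df, dr in (KNIGHT if kind == 'N' else KING):
                nf, nr = f + df, r + dr
                if 0 <= nf < 8 and 0 <= nr < 8:
                    tgt = board.get((nf, nr))
                    if tgt is None or tgt.isupper() != white:
                        total += 1
        else:  # sliding pieces: R, B, Q
            dirs = ROOK if kind == 'R' else BISHOP if kind == 'B' else ROOK + BISHOP
            for df, dr in dirs:
                nf, nr = f + df, r + dr
                while 0 <= nf < 8 and 0 <= nr < 8:
                    tgt = board.get((nf, nr))
                    if tgt is None:
                        total += 1
                    else:
                        if tgt.isupper() != white:  # capture ends the ray
                            total += 1
                        break
                    nf, nr = nf + df, nr + dr
    return total

fen = "1r6/1n1R1b2/8/1p1p3k/pPpPp1p1/2P1P3/P2K1PP1/8 w - - 0 1"
b = parse_fen_board(fen)
print("Black pseudo-mobility:", mobility(b, white=False))
print("White pseudo-mobility:", mobility(b, white=True))
```

Despite being two pieces up, Black's count comes out barely above White's – exactly the "mobility far below what the material would suggest" signal the comment describes.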

WildKid 4/25/2017 08:30

benedictralph - As to the AI solution being 'trivial' - if that is so, why not outline it here?

As to these ideas having no practical application: the logic of 'fortresses' in chess closely resembles that of complex write-access resource-locking situations in relational databases (SQL Server, MySQL, Oracle etc.): a very practical problem to which current solutions are definitely suboptimal.

benedictralph 4/25/2017 07:02

An efficient "solution" to this in AI would be trivial, not requiring the introduction of anything that isn't already in the AI literature. It would also likely not have any applications beyond chess (in fact, beyond this type of unusual position in chess). So in science parlance, the problem is "not worth addressing". Chess players and problem composers can have fun with it, though.

WildKid 4/25/2017 05:46

'Fortress' type endgames are far from uncommon these days, and the fact that engines sometimes handle them suboptimally is a genuine and non-trivial deficiency.

It's also worth looking at some games of Tigran Petrosian (1929-84), who would deliberately set up hyper-closed 'fortress'-type formations in the middlegame and then win through extremely long and convoluted sequences that would finally open the fortress to his advantage. I suspect that engines might not always predict the best defense against this type of strategy.

Karbuncle 4/25/2017 04:44

Humanwise, the concept in the final puzzle is obvious in terms of what you need to do as white: sac on the knight, play g3, and then use the king to guard the h-file.

Enthusiast0309 4/25/2017 03:24

Sorry, my fault. The initial move is Rxb7, not Rxd7. The white rook is already on d7.

Enthusiast0309 4/25/2017 03:22

In the Illescas problem, my stockfish iphone gives black -3.5 advantage in the initial position. In reality, however, white has a drawing line which gives him a fortress position. Starting with Rxd7 and if black recaptures with Rxd7, white can play g3 next to fix the position. After which, black can not use his two piece advantage to penetrate white's position. Black's bishop is useless, so is his King. He can try a3, then Ra8 to go to a4, then Rxb4. But the c pawn has nowhere to go as the white King can defend against this. So it should be a draw.

But the computer does not recognize this because it gives evaluations based on material advantage and due to certain limitations as to the depth of its calculations, it is possible that computers may miss the initial move Rxd7.

WildKid 4/25/2017 02:09

AI techniques tend to divide into 'neat' (logic-based) and 'scruffy' (emulating the way the human brain works). Chess was the first major problem to fall to AI, using 'neat' techniques. However, most subsequent breakthroughs in AI have used 'scruffy' techniques such as neural networks (Watson, Go, etc.). The world-championship-beating Go program, for instance, did not look even as far ahead as a human amateur (since the board is too big for exhaustive search to be effective), but imitated the human player behaviors that paid off, based on millions of games.

So, I wonder if it might be useful to apply neural networks, or other 'scruffy' techniques, to chess?
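Editor's note: a minimal illustration of what such a 'scruffy' evaluator looks like structurally – a small network mapping hand-crafted position features to a score, instead of a deep search. The feature names and weights below are invented purely to show the shape of the computation; a real system would learn the weights from millions of game outcomes:

```python
import math
import random

def tiny_eval_net(features, w1, b1, w2, b2):
    """One-hidden-layer network mapping position features to a score in (-1, 1).
    Purely illustrative: the weights here are random, not trained."""
    hidden = [math.tanh(sum(f * w for f, w in zip(features, row)) + b)
              for row, b in zip(w1, b1)]
    return math.tanh(sum(h * w for h, w in zip(hidden, w2)) + b2)

random.seed(0)
n_features, n_hidden = 3, 4   # e.g. material diff, mobility diff, king safety
w1 = [[random.uniform(-1, 1) for _ in range(n_features)] for _ in range(n_hidden)]
b1 = [random.uniform(-1, 1) for _ in range(n_hidden)]
w2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
b2 = random.uniform(-1, 1)

# Invented feature vector for a fortress-like position: Black is 9 pawns up
# in material, but the mobility difference is tiny.
score = tiny_eval_net([-9.0, 0.5, 0.0], w1, b1, w2, b2)
print(f"net evaluation: {score:+.3f}")
```

With trained weights, such an evaluator could in principle learn that huge material plus near-zero mobility means "draw" – precisely the pattern a material-counting search misjudges.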

Why is this considered so interesting by ChessBase? Most chess engines are programmed to play "real chess" well, not to solve artificial compositions that will never occur in real games. It's not like computers *cannot* solve these problems - it's just not worth the effort for the programmers to code these exceptional situations just to give more accurate evaluations in say 0.00001% of the chess positions.

It's like saying that since the best football player cannot play chess, "humans" cannot play chess. No: chess engines are simply not aimed at solving these weird fortress positions.

charlesthegreat 4/25/2017 12:52

You forget that after 3. Ke2 a3! Black can try the Ra8-Ra4-Rxb4 idea to get a passed c-pawn. Even then, the right king moves can thwart this plan too.

spieler8 4/25/2017 12:44

> It looks simple to me: if the evaluation of several best moves is exactly the same and it stays stable it means there is no win. If that is
> true it shows computer programmers are not interested to solve something which is so unlikely to happen – or they are simply
> lazy!"

I cannot believe he is a trained computer scientist, because otherwise he wouldn't say such stupid things...

albitex 4/25/2017 12:26

I believe:
1. Rxb7 Rxb7 (1... Rh8 2. Rxf7 Kg6 3. Rb7=) 2. g3 Rb8 3. Ke2 Kg6 4. Kf1 Rh8 5. Kg2 and Black can no longer break in – a fortress: Black can do nothing with the material advantage. Even after 1. Rxb7, Stockfish assigns a decisive advantage to Black (-9.0?). This is normal; fortresses are the last frontier of the engines.

Keith Homeyard 4/24/2017 11:01

Pardon me, Sagitta, but perhaps you are deviating from the point.
I think many humans will find this trivial and maybe amusing. As I understand it, this is a test of whether a silicon-based engine can solve it.