The relatively new release of the Stockfish engine as the Brainfish engine, paired with a strong book, has an appeal, especially if the book is tuned for the engine. I am not sure that is actually the case; the Cerebellum book is undoubtedly enhancing the engine's performance, but the question is whether it is any better than another well-tuned book would accomplish.

In carrying out the following match I had intended to use my own book, but with difficulties getting either the Fritz 13 or Fritz 14 GUI to load two versions of the same engine, I ended up allocating the Vesely opening book dating back to January 2011 and unfortunately did not pick up on the mistake until I started analysing the first 80 games. Incidentally, the Fritz engine-load issue was down to poor programming: the GUI only checks whether the first engine's name string appears within the second engine's name string, taking no account of any difference in string length. Because I had added the suffix _NB to the engine running the Vesely book, I found the GUI did accept both if I loaded the longer string name as the first engine! Yet another example of sloppy and careless GUI programming!
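The substring check described above is easy to reproduce. The sketch below is my own illustration (the actual Fritz GUI code is not public), using hypothetical function names and the _NB-suffixed engine name as an example:

```python
def is_same_engine_buggy(first: str, second: str) -> bool:
    # Hypothetical reconstruction of the GUI's duplicate check: it only
    # tests whether the first engine's name occurs inside the second's,
    # ignoring any length difference between the two strings.
    return first in second

def is_same_engine_fixed(first: str, second: str) -> bool:
    # Comparing the full strings for equality treats the _NB-suffixed
    # engine as a distinct entry regardless of load order.
    return first == second

# "Stockfish" loaded first: wrongly flagged as a duplicate of "Stockfish_NB".
print(is_same_engine_buggy("Stockfish", "Stockfish_NB"))   # True (false positive)
# Loading the longer name first works around the bug, as described above.
print(is_same_engine_buggy("Stockfish_NB", "Stockfish"))   # False
print(is_same_engine_fixed("Stockfish", "Stockfish_NB"))   # False
```

The order-dependence of the buggy check explains why swapping the load order made both engines appear.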

Although I had always considered the Vesely book one of the strongest publicly available at the time, as the match progressed it looked as if the Cerebellum book was highlighting that lines had been updated since 2010: by game 80 it was running to the good at +14/=65/-1 (+58 Elo), with the solitary win for the Vesely book looking to be more a consequence of two bad moves by the other engine approaching the first time control.

In contrast, by the time I had to stop the match after 120 games the differential was down to 26 Elo, and in the last 40 games the engine + Vesely book scored +11/=22/-7 (+35 Elo) in its favour, so quite possibly the engine + Vesely book might have turned it around had I been able to run this through to 160 games.
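The Elo differentials quoted in these match reports can be reproduced from the raw win/draw/loss counts with the standard logistic rating formula; a small sketch of my own (not part of the testing setup):

```python
import math

def elo_diff(wins: int, draws: int, losses: int) -> float:
    """Elo difference implied by a match score, via the standard
    logistic model: diff = 400 * log10(p / (1 - p))."""
    games = wins + draws + losses
    p = (wins + 0.5 * draws) / games   # scoring percentage
    return 400 * math.log10(p / (1 - p))

print(round(elo_diff(14, 65, 1)))   # ~57, close to the +58 quoted above
print(round(elo_diff(11, 22, 7)))   # 35, matching the last-40-games figure
```

The +14/=65/-1 figure works out to a 58.1% score, i.e. roughly the +58 Elo quoted (small differences depend on the exact formula the GUI uses).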

The initial conclusion, though, is that the 5.5-year-old Vesely book gave the Cerebellum book a good contest, and perhaps the Cerebellum book may not look so good in a longer match against an up-to-date alternative book that can be tuned for results as the match progresses. It is unclear whether the concept of the Cerebellum book is any improvement on what can already be achieved.

> The initial conclusion, though, is that the 5.5-year-old Vesely book gave the Cerebellum book a good contest, and perhaps the Cerebellum book may not look so good in a longer match against an up-to-date alternative book that can be tuned for results as the match progresses. It is unclear whether the concept of the Cerebellum book is any improvement on what can already be achieved.

I don't think it's better than the ones being updated, either.

The concept (minimaxing) may be an improvement over stats, that remains to be seen, but because it has "Fish" in the name, everyone is running around like headless chickens.

Is Vesely's book still downloadable? I have a copy, several I believe, but I remember it took very long to download. It wasn't optimised for engines, so I'm not sure it was such a good opponent for the match? It is probably too large to post here as a file, but maybe on RybkaChess.com; it would, however, be necessary to have permission from the author.

But it is very quiet in here, and I am afraid the few of you who are here are missing the really big news: Jeroen has made an update of his opening book! Check out the subforum, or Talkchess, or Ed's website. It is pretty large also. If I had to choose to spend my money on this or on the Cerebellum, I would go for the former, because it feels to me like a real book. Of course you can combine any book with Brainfish too... We currently have 50 guests on Talkchess; I don't know if they are all just downloading Jeroen's opening book without saying a word of thanks...

No, I don't agree, Brainfish/Cerebellum is much stronger. I tested it: I ran Brainfish/Cerebellum on one computer against Asmfish/Noomen.ctg on my second computer, in a French Defence. When the engines came out of book, Brainfish/Cerebellum had a big +0.60 lead over the Noomen book. Such a big lead is disastrous in online engine-engine matches. Cerebellum wins, no contest.

Funny? Yes, I forced it to play the French Defence. I like the French Defence, and if this book is so good, it should be able to handle it. So 'sad', I think, is the more appropriate word, not 'funny'. On the other hand, Cerebellum has handled anything I threw at it.

Well, there you go. The defences in my book are the Ruy Lopez, Sicilian and Caro-Kann. It avoids the French, and not without reason. So testing a line that is not actively played is rather useless, because the book will never play it on its own and it is not optimised to use this defence as Black.

46 draws, some of them coming very quickly. I noted that game 2 was a book draw without a single move being played by either engine.

Of the 4 wins, I have 3 down to engine mistakes, with the only immediately identifiable book influence being in game 49 when, as Black, the Noomen.ctg book outbooked Cerebellum and pointed the engine in the right direction, giving it a -1.78 score in its favour, with the indication that the Cerebellum book may be in error here since Black's 16..0-0 has a good track record. Perhaps an example where Stockfish tuning goes awry ...

Games 51-100 saw 4 wins without reply for the Cerebellum book. As per my previous comment, there seemed nothing immediately winning at the book departure points, but there may be more subtle attributes in the positions that lead to wins much later in the game. However, with the very low win rate being maintained at just 8%, apart from game 49 the wins may just be the result of SMP variability. The high number of draws suggests there is not too much difference in the books' capabilities, but the question is: is it possible to tune the books to provide more winning opportunities without weakening them?

I'll run this through to game 150 and then run the same match using Komodo 10.1 with the Noomen book. I suspect Komodo 10.1 is a little way behind the latest Stockfish engines, but it will be an interesting control to check whether there are any issues with the Stockfish book tuning. It would perhaps be of more interest if there were an up-to-date book tuned specifically for the Komodo engine that would really test the Cerebellum tuning method.

Seems strange that the "SMP variability" seems to favour the much-criticised Cerebellum. No offence, but I can't help wondering if the factor of SMP variability would have been raised if the "4 wins without reply" had been in Noomen's favour!

You are reading too much into it. The main point of this exercise was to identify if the books offered any killer lines. The results speak for themselves; they do not but show themselves to be well balanced but tending towards draws.

With bigger hardware resource allocated to each engine and longer time controls the draw rate would likely increase further when SMP variability would be of less influence.

As with any non-GUI-dependent book, there is some merit in the Cerebellum book because it can be used in any GUI supporting UCI engines. However, there is no flexibility because the end user is stuck with what it provides, and currently it cannot be used with other engines. I do not see this as a new concept at all.

The Chessbase book system gives the end user many more options; for example the facility to modify the book lines, introduce and delete moves without having to build a new book, as well as marking moves specifically to be played with weightings, and the option of different settings for learning and variety. A Chessbase book can be used with any engine, but the main drawback is that use outside of the Chessbase GUIs is limited and in most cases not possible.

I talked to the author of Cerebellum and he confirmed that Cerebellum can be used with any engine in the final, commercial version which will be released in November. So that takes care of that particular point.

> As per previous comment there seemed nothing immediately winning at the book departure points but there may be more subtle attributes in the positions that lead to wins much later in the game.

I notice that the Cerebellum book sometimes stays in book for over 40-45 moves, when the position is still +0.15 according to SF, but objectively totally drawn. IMO it is pointless to have such enormously long lines in the book. I prefer to have shorter lines, accepting a -0.15 with Black and having the opportunity to find something new later on.

> The high number of draws suggest there is not too much difference in the books' capabilities but the question is; is it possible to tune the books to provide more winning opportunities without weakening?

After some testing and tweaking my second match ended 2-2 with the rest draws. The problem is that the Cerebellum book doesn't seem to have much variety. The lines it plays are very, very solid and can hardly be beaten without taking too much risk. It gets a bit boring, actually. I see countless times the same Italian game, the same Najdorf line, the same Sveshnikov line and the same line against the Caro Kann. I am not very keen on a rat race doing countless improvements at move 40 to 45 to avoid a loss. And I prefer to test against books with a lot more variety.

> I'll run this through to game 150 and run the same match using Komodo 10.1 with the Noomen book. I suspect Komodo 10.1 is a little way behind the latest Stockfish engines but it will be an interesting control to check out if there any issues with the Stockfish book tuning.

Thanks for testing. Indeed it will be interesting to see how a match vs Komodo will go. Looking forward to this match.

The Cerebellum book does seem very narrow and very deep. It would be worse if the 2nd move value were turned down to zero. I selected a value of 50 that I hoped would give reasonable variability, but it does seem to be mostly down to the opponent to provide the variety. One benefit of going so deep is that in faster time-control games it can gain a significant time advantage, but here, using the same engine for both books, there was no measurable benefit for either setup when one outbooked the other.

Given the Cerebellum book is labelled _Light, it leads one to believe there is a _Full version, and having whetted people's appetites I wonder if the intention is for the _Full version to be commercial. It may appeal to some, but I suspect serious users prefer their own books.

Games 101-150 saw just 3 more wins; 2-1 in favour of the Cerebellum book giving a final score of 7-4 to the Cerebellum book with 139 draws.

Game 103 was a Giuoco Piano in which the Noomen book, as Black, was exhausted as early as move 8, while the Cerebellum book continued to move 14.cxd4; after Black's reply ..c5, White scored an advantage of 0.62 for 15.a5 and, with still much work to do, it nevertheless did not look back. This could be down to book advantage, but it is unclear.

The second win as White, this time for the Noomen book, came in game 130 and was a consequence of an old Stockfish problem rearing its head again: a sequence of 0.00 scores, here for both engines, but after Black's 28..Ra5 0.00/28, White's 29.h6 scored 0.59/30 and went on to win. This 0.00 issue with the Stockfish engine seems to surround possible draw scenarios that cause it to miss key moves. Check out Black's moves 21 to 28, all scoring 0.00.

The third win for White, and the second for the Cerebellum book, saw the Noomen book outbooked by 7 moves. Despite the engines being the same, check out the score differential after Black's 45..Kg7. Again something looks to be wrong here, with the engine as Black being 1.7 at odds with White's score despite indicating an 11-ply deeper search.

After 150 games, other than game 49 and possibly game 103, there was little measurable evidence to suggest the books had much influence on the wins, with some indicators that all was not well with BrainFish (Stockfish) scoring in some of the games, suggesting it was going astray.

I just checked the Book Move2 Probability and confirm the setting was 50 because ...

of the 35 games beginning 1.e4 where the Cerebellum book had Black it only responded with 1..e5. Not one Sicilian at all!

Of the 28 games opening 1.d4, as Black Cerebellum played 3 replies with Nf6 and the remainder with d5.

Thanks a lot for the test, Peter. I have addressed the early book-leave issues; I had also seen them in my own tests. In a sense the Cerebellum book is excellent for finding 'black holes' in one's own book. So I will do more tweaking and testing until all losses are gone. The 'bad' thing will be that future matches between those two books might end in 100% draws....

> The 'bad' thing will be, that future matches between those two books might end in 100% draws....

Unfortunately the effort put in to find wins is rarely rewarded by that success lasting long once the game(s) become public. The problem then becomes the same as seen in today's correspondence chess, with the high number of draws a consequence of players using computers and opening databases.

The prediction of the death of chess when all games end in a draw comes ever closer, but there is still mileage, perhaps using different engines and perhaps real rather than artificial intelligence? The general trend with engines has been towards moving away from positions creating tension to early simplification, and I have always seen the opening book as a vehicle to force engines into those more interesting positions, but it will require the artistic nature of the human mind to continue to create the lines leading to those positions.

> The prediction of the death of chess when all games end in a draw comes ever closer

In engine-engine games this might be possible. I am not afraid for us humans, though. Our tendency to make mistakes keeps the game interesting.

BTW, there is more in chess computer life than Cerebellum. My book is not intended for quick SF-SF games only. I refuse to skip the Ruy Lopez because that always leads to a draw. And I will refrain from playing lesser known lines only, to boost the results in engine matches. My target is a broader audience and thus the book should simply play all the popular openings.

Thanks for all the tests. I will try to comment on the remarks and questions raised in this topic; reading all this was very interesting for me. I was not sure where to put this, but as Jeroen seems to be the one challenged, I will answer him directly.

First, referring to your comment "there is more in chess computer life than Cerebellum", that is only true for Cerebellum_light. The full version covers the same broader audience as your book, which I hope I am able to explain in my post here.

The Cerebellum_light book is an export of the unpublished "Cerebellum full" book, which is mainly a tool for the deep analysis of chess positions and opening lines, intended to be used for example in correspondence chess and in the opening preparation of grandmasters. The full book offers most of the functions available in standard opening books, like statistics and manually selecting preferred lines, and of course all the additional Cerebellum functions. It can handle two engine-calculated graphs/trees together. You can see a part of its data on the "Demo" tab of the Brainfish website, which takes some time to load. It is also part of a completely new chess GUI.

It can be very interesting to dig around in the Cerebellum book, searching for new lines and connections, calculating and adding new positions, recalculating the whole graph and looking at what has changed. It is possible, and has already occurred several times, that adding an endgame position changed what is played in the first moves. The library is very broad and covers every aspect of chess openings, including rarely played gambits.

Some time ago I tried to use Cerebellum for playing in engine chess too, and developed a proxy which can connect the full book with any engine. The playing part was not as easy as it seems, because you have to handle many things like repeated positions, moves with equal scores but different depth in the graph, and so on.

After successfully playing for about a year on sites like Infinity and Playchess, with several tournament victories and a lot of second and third places against strong opponents, I decided to develop Brainfish as a free implementation of the Cerebellum library intended for use in engine chess, but without the analysis and statistics functions of the full library. The main reason for Brainfish is to get feedback from users to improve the library, to see how it behaves in competition, and of course a bit of advertising for the full library. So far I'm very satisfied with the results.

Now to some of the issues mentioned here:

1. The Cerebellum book repeats losing lines

With the concept of adding new positions only through the calculation of those positions and then performing a recalculation of the whole graph, it is not possible to do that on the fly. In engine chess that is not really a weakness, because practical experience shows that you normally don't lose twice in the same way in a sequence of, let's say, about 80 games; in fact you normally lose only about 1-2 games out of 100. So with the full library you can always recalculate your book with all the games you played, including the lost ones of course. For now some users of Cerebellum are sending me their games, and they are very satisfied. The book is updated about 3 times a week. Recent reports showed results in practical play like, for example, +12 =120 -2.
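The "recalculation of the whole graph" can be pictured as a negamax backup over the position graph: leaf positions carry engine scores, and the backup propagates the best reply score to every interior position, which is why adding one deep position can change what is played at move 1. This is my own minimal sketch of the idea, not the Cerebellum implementation, with a made-up two-move graph:

```python
def recalculate(graph, scores):
    """graph: position -> {move: child_position}; scores: engine evals for
    leaf positions, from the side to move's perspective. Returns the
    backed-up score for every position in the graph."""
    memo = {}

    def negamax(pos):
        if pos in memo:
            return memo[pos]
        children = graph.get(pos)
        if not children:                      # leaf: use the engine evaluation
            memo[pos] = scores[pos]
        else:                                 # interior: best move for side to move
            memo[pos] = max(-negamax(child) for child in children.values())
        return memo[pos]

    for pos in graph:
        negamax(pos)
    return memo

# Tiny hypothetical graph: "start" has two candidate moves.
graph = {"start": {"e4": "p1", "d4": "p2"}, "p1": {}, "p2": {}}
scores = {"p1": 0.30, "p2": -0.10}            # from the opponent's perspective
backed = recalculate(graph, scores)
# start's score is max(-0.30, +0.10) = 0.10, so "d4" becomes the book move.
```

Adding a deeper continuation under "p2" and re-running the backup could flip the preferred first move, which matches the behaviour described above where an added endgame position changed the first moves.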

2. The very narrow and very deep book.

It was a decision to let the book play only the strongest moves for the first release. I will keep that for some time until I have gained some experience, and then try to broaden the book moves, leading to more flexible play. In the full version you can configure the book play exactly how you want, like in a standard opening book. In Cerebellum light I manually ruled out only the Marshall Gambit, which is a bit overestimated for White by Stockfish, and temporarily, in the last 3 versions, the Sicilian, until I have finished some bigger calculations.

3. Book Matches

I think one single book match covers only a part of a book's strength. A main aspect is how fast a book can be adapted to new lines, lost games or, in general, the progress of opening theory in chess and engine chess. With Cerebellum (full) that is very easy: new games are added automatically with all their calculated positions, and then you really know why a line is losing; maybe the line was not bad at all and only one move was faulty. In general I need about 200 games to "absorb" a good book; from then on it only wins occasionally, and the weaknesses it may have exposed in Cerebellum are gone.

4. Many draws and long lines without winning chances.

The concept for the Cerebellum_light release was to have a book which is completely calculated by Stockfish itself, without human intervention. So the name Brainfish is justified, because Stockfish alone is playing, with a "brain" which created new knowledge out of Stockfish-calculated positions purely through the graph recalculation of all scores. With that concept I was successfully playing engine tournaments for over a year, and other users are successful too. But with the improvement of Stockfish in the last months I saw an increase in the drawing rate, not only in the book but in Stockfish vs Stockfish games too. That seems to be a general problem; maybe we are really facing a draw problem regarding the best engines playing with good books, endgame tables and on fast computers. Of course I can manually change some lines, but I would like to do that automatically, and there are already some ideas, like using Komodo evaluations in addition to the Stockfish ones. And finally, using statistics to decide between moves with a similar score is a second possibility.

If you have questions please simply ask; my English is not the best, so sometimes I'm not as clear in what I try to explain as I would like to be.

Thank you very much for your detailed reply. By the way, your English is very good :-).

Concerning my reaction that "there is more in chess computer life than Cerebellum", I was referring to my own book. What I meant to say was that I don't want to concentrate on one book only, to make sure my own book scores well against that book. For example, I won't skip the Ruy Lopez just because my book only scores draws with it against the Cerebellum book. When I see an interesting idea played by a top GM, I might include it in my book, even if objectively speaking it could lead to a draw. IMO they will be interested to see this new idea in a book; it gives them the opportunity to see what the top engines think of this line. That is more important to me than the fact that it might hurt performance a little bit. Finally, I am not that interested in optimising for engine-engine blitz matches. That has never been my main objective.

In any case, I very much like the Cerebellum book concept. I know that in checkers the strongest books are completely computer generated, using the biggest endgame databases. That approach leads to unbeatable books of superb quality. In chess, however, I am of the opinion that the view of a strong human player is necessary to avoid certain lines to be played. I have seen quite a lot of lines where Stockfish thinks white has a small/clear advantage, but I simply disagree with that evaluation.

Match conditions as previous. Komodo 10.1 contempt parameter set to 0. There was no date-stamp change to the Brainfish engine or the Cerebellum book after the previous match, indicating no consequential changes, therefore I saw no need to overwrite them.

The previous match had established there were maybe a couple of killer lines, if any. Both books are strong in their own right, therefore the purpose of this match was to identify whether the Komodo 10.1 engine with the Noomen book would provide a different performance.

Games 1 to 50 highlighted some more salient points regarding narrow books.

Komodo 10.1 + Noomen book went into an early 2-0 lead after winning game 10, but by game 20 BrainFish + Cerebellum had pulled it back, including a win with Black. The early 20% win rate looked promising but had dropped back to 14% by game 50.

Komodo's first win came in game 6 in a Nimzo-Indian, with Cerebellum exhausted after 9..Qa5 and the Noomen book continuing to move 16. The scores looked marginally in Komodo's favour until BrainFish played 29..f4, after which the game moved Komodo's way.

Black wins are always interesting, and BrainFish's win in game 16 came with early departures from both books after the Noomen book's 8.Na3, which looked to be a move added to the book that may not be best, as the scores showed BrainFish quickly pulling the score through equal to an advantage for itself. Perhaps 8.Na3 works with deep analysis, but not so under these match time controls and allocated engine resources. There is a potential risk of further losses with no book alternative.

In these first 50 games there were 8 games exactly the same: games 20, 22, 30, 36, 40, 42, 44 and 48 all gave the same 26-move draw in a C90 Ruy Lopez opening, with the White line being played by the Komodo engine. The BrainFish engine did not play any moves of its own, with them all coming from the Cerebellum book. The Noomen book managed to move away from this in game 50 and obtained a win to pull the score back to 4-3 in favour of BrainFish + Cerebellum. It will be interesting to see if the Cerebellum book has an answer to this in the remaining games.

Some comments: this Ruy line that was repeated several times can lead to interesting play if Black plays differently, but Cerebellum's line is just a draw, so I changed this line. The Na3!? novelty in the Giuoco Piano is inspired by Anand, who used these Na3 ideas in the Candidates tournament. His idea, however, didn't get any followers, so I guess it was a one-time inspiration. In later games GMs simply kept on playing Nbd2, which seems more logical and better.

In any case, your matches give me a lot of room for improvements. Very nice :-).

Games 51 to 100 saw BrainFish + Cerebellum increase its lead over Komodo 10.1 + Noomen.ctg with a score of 7-3 with 40 draws.

The immediate point of interest here was whether the Cerebellum book had any answer to the line played in game 50. It did not, losing games 52, 54 and 56 with Black when, as expected, the Noomen book continued to play the same line. So what happened in game 58? Move 13 is what happened: already out of book, Komodo played 13.Bc4 instead of the previously played 13.h3 and went on to lose. It had to be move 13!

In case Mr. Shrapnel reads this, this is a prime example of the consequence of SMP variability, which in this case completely reversed the result. A further consequence was that after 2 drawn games with Queen's Pawn openings the book reverted back to the C90 26-move draw, after which it changed to the alternative 8.c3, giving a C89 line, and that draw kicked it out of 1.e4 until game 76. The winning C90 line seems out of favour for now because of that loss.

After that initial flurry of winning activity, two further wins came from Brainfish with the Cerebellum book as White against the Caro-Kann in games 69 and 83. After two earlier draws with 9.Qe2, the Cerebellum book changed to 9.0-0, which, as can be seen, looked to be just a transposition but was sufficient to throw the Noomen book off guard, gaining the extra half point.

Just as the Komodo 10.1 + Noomen book combination had a 4-win flourish from game 50, the BrainFish + Cerebellum combination had 3 successive wins with White in games 95, 97 and 99, with game 99 being another Caro-Kann loss for the Komodo + Noomen book.

Game 95 was a QGD that saw Komodo go downhill fairly quickly after exiting book after move 12.

The game 97 win suggested Komodo was lost for the correct move with the early departure from book in a Giuoco Pianissimo, when perhaps 8..Bg4 should have received a green move mark to avoid the dubious 8..g5 played by Komodo.

Interestingly, Komodo often disagrees with the SF eval. Where SF says "clear advantage", Komodo's eval is close to 0.00.

When I have fixed the weaker lines in my book, it would be interesting to repeat the match.

BTW, what puzzles me is that I have 3.Bc4 as an alternative to the Ruy, but the book never seems to play this, even after a huge number of draws in the Marshall Gambit. I don't know why the CB software doesn't switch to 3.Bc4 at some point.

> When I have fixed the weaker lines in my book, it would be interesting to repeat the match.

I'll certainly consider running that if you wish, although the weather has warmed up here again, so I was glad the match finished this morning because the dual Xeon machine does generate much heat.

Please also satisfy yourself that the wins and losses were not just consequential to the faster time control used, which is close to the 40 moves in 15 minutes used for CCRL and CEGT references.

I used the match games as the database for the book to learn from, with Komodo 10.1 64-bit. The "Prob" column shows that the likelihood of 3.Bc4 being played was very slim from the outset and was further reduced based on the match results, as indicated in the [%] column. Check game 16, when it lost with 3.Bc4, and later drew in game 80. The probability of the line being played looks consistent with 2 games in the 75 played as White here. Bear in mind I used a fresh copy of the book for the second match, so there was no learning influence from the previous match.

Indeed it is getting a lot hotter, same over here. The coming 5 days we have 30 degrees centigrade or more.

One of my testers asked me to do a retest, he is currently running it (albeit on a much slower machine). I'll post the result later.

The most interesting fact from the Komodo match is that you'll often see a clear discrepancy in evaluation. A Brainfish-SF match might not put the finger on the weak spots in SF's eval; hence this second match was very interesting. Furthermore, Komodo might play some lines better than SF and vice versa. An example is the Caro-Kann, where Komodo badly goes astray. SF finds the correct line of play quite quickly.

In games 101 to 150 BrainFish + Cerebellum increased its lead with 5 wins and 2 losses, with 43 draws, for an overall margin of 8 wins and a score of +16 =128 -8. Win rate = 16%.

Game 103 saw BrainFish win in a Giuoco Piano with both books exiting by move 9 and BrainFish gaining a spatial advantage, allowing better mobility of pieces behind its pawn wall to achieve a very nice attack on Komodo's king defences.

Games 107 and 141 were the fourth and fifth losses by Komodo in the same Caro-Kann line.

Game 122 gave Komodo a win in a Nimzo-Indian, Sämisch variation. Both sides scored even around the first time control and, with opposite-coloured bishops but both rooks still on the board, it looked to be heading towards a draw, but here it is worth looking at the subtlety of Komodo's play to get the win. I am still not sure I fully understand it!

Game 128 gave Komodo its last win in a QGD 5.Bf4 variation. The Noomen book provided Komodo with an extra 6 moves. There was not much in it after the books were exhausted, but perhaps the sequence of 0.00 scores by BrainFish up to 29..Re7 suggested it may have missed something that Komodo was able to work on.

Game 143 saw a rare Sicilian, and it appears the addition of 12..Be7 to the Noomen book may not be so good, when 12..Nc5 may be better. BrainFish showed a score of +1.13 out of book for 13.a3, and Komodo did not look comfortable after the Noomen book's 14..d5. BrainFish went on to win.

Both matches provided some interesting games, and the high number of draws indicated there is not too much between the books. It is important to point out that different engines are likely to get different results, but that does not necessarily mean the losing lines were bad, when perhaps it is just the case that under these match conditions the engines are unable to provide better play with the lines.

There has obviously been much work put into these books for our enjoyment, so it just leaves me to thank Jeroen Noomen and Stephan and Thomas Zipproth for their work, and I hope they see any comments I have made as positive rather than negative criticism.

Thanks a lot for the effort. All losses are in lines that were indeed not optimally checked in my book. And I also found a few unintended priorities, like the quick Sicilian loss. The Caro-Kann losses are rather strange, it appears that Komodo doesn't know the correct line of play. In my own tests this Caro-Kann line now always leads to a draw.

The past three days I have been patching the suboptimal lines and I have asked one of my testers to do another test in the same setting.

Thanks a lot for your efforts too. Without testing and feedback, progress would not be possible. I'm updating the Cerebellum book too; let's see how things develop. I will reactivate the Sicilian when the calculations are finished; it still needs some days, I think.

Looking at some of the games, I wonder if Komodo has more SMP variability than other engines? After a draw and 2 wins in the C54 Giuoco line in games 8, 16 and 20, then came the loss in game 24 when it changed its first move out of book from 16.Qe1 to 16.Qe2, similar to what happened in the C90 Ruy Lopez games in my match. In analysis on my older overclocked quad with 4 GB hash and 256 MB table memory, it also considers 16.Ba4. I also noted the 0.00 evaluation may have had some influence on both engines after move 16 when, for example, in game 24 Komodo had 26.c4 at 0.00 but BrainFish's reply was scored at 26..Rg5 -0.96/24.

Stockfish 190816 shows less variability, sticking with 16.Qe2, albeit with a 0.00 score up to 28 ply, but then has a very long "think" before changing to 16.d4 at 29 ply; the strange thing is the score stays at 0.00, so there is no indication why it changed.

I cannot help but wonder, then, about the reliability of the engines' analysis!?

I am quite surprised that Komodo scores so well with my book, playing against Brainfish + Cerebellum. After some more tweaks, a second match ended 5-2 in favour of Komodo, with 43 draws. The total score after 100 games is thus 10-5, with 85 draws (52.5-47.5).
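For anyone wanting to translate such a match score into an approximate Elo difference, here is a minimal sketch using the standard logistic rating model (the function name and layout are my own, not from any of the tools discussed here):

```python
import math

def elo_diff(score_fraction):
    """Elo advantage implied by a score fraction under the logistic model."""
    return -400 * math.log10(1 / score_fraction - 1)

# Komodo's 52.5 points from 100 games against Brainfish + Cerebellum
print(round(elo_diff(52.5 / 100)))  # prints 17
```

So the 52.5-47.5 result corresponds to an edge of only about 17 Elo, which underlines how close these book-versus-book matches are.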

My move selections were made by using SF as an analysis engine, so I thought it would be more or less a "Stockfish book". But Komodo seems to like it as well :-).

A third match was aborted after 13 games, when Komodo+Noomenbeta.ctg was leading 6-0 (with 7 draws) against Brainfish. The latter stumbled into a bad line and kept repeating it 5 times, losing all 6 games. It seems to me that the light version of Cerebellum sometimes doesn't have alternatives available when a line fails.

It is getting time I buy Komodo. It seems excellent in pointing out errors in SF's eval.

That's interesting; I never saw such results in practical play, but as I pointed out, Cerebellum can only learn by calculating the lost games and adding them to the graph. It cannot adjust itself during play, because it does not use statistics on the fly.

That means, of course, that if you detect a losing line in Cerebellum and let your book adjust to it, you can also score 1000-0 against Cerebellum, but that does not indicate a serious issue, except of course that the variability of the moves played by Cerebellum is a bit too low at the moment. The next step for Cerebellum would be to play more different lines, which means also selecting moves with a lower score than the best one, and finding a good selection method for those moves, perhaps a combination of score and statistics.
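One simple way to "select also moves with a lower score than the best one" is to weight the book candidates by their engine scores, for example with a softmax. A hypothetical sketch only (the moves, scores, and temperature parameter are invented, not taken from Cerebellum):

```python
import math
import random

def pick_book_move(candidates, temperature=50.0):
    """Randomly pick a book move, weighted by centipawn score.

    candidates: list of (move, score_cp) pairs.  A low temperature
    almost always picks the top move; a high one adds variety.
    """
    best = max(score for _, score in candidates)
    weights = [math.exp((score - best) / temperature) for _, score in candidates]
    r = random.uniform(0, sum(weights))
    for (move, _), w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return move
    return candidates[-1][0]  # guard against floating-point leftovers

# invented example position with three book candidates
book = [("e4", 30), ("d4", 28), ("c4", 12)]
print(pick_book_move(book))
```

Statistics (win/draw/loss counts) could then be folded in by scaling each weight by the move's historical score, giving the score-plus-statistics combination suggested above.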

The main issue here, I think, is that you cannot change or view Cerebellum Light yourself; I hope to make progress with the full version soon.

Recent results from Playchess still show scores like +12/=80/-1, so that issue does not seem to happen in real competitions, where the library is updated about every second day.

I also never saw such a result when testing against other books, but I normally recalculate Cerebellum after each run of about 100 games.

Of course this is an ongoing process, and the updated book I released today may show different results. At the moment I have no free computer for making tests with your book, but it would be an interesting competition to see how the different methods compare. Perhaps you can release your updated book sometime, so that someone can run new tests.

That was one of the drawbacks that I picked up on in the very first match I posted in this thread. Everything seemed to be going well for the BrainFish + Cerebellum combination against the same BrainFish engine without Cerebellum but paired with the 5-year-old Vesely.ctg openings book.

Game 83 saw BrainFish + Cerebellum lose with Black in a B97 Sicilian Poisoned Pawn variation, and having identified that line, the engine using the Vesely.ctg book registered 11.5-0.5 as White with that opening in the latter part of the match. So in that match it took just 83 games for exactly that situation to arise. I commented that the Chessbase book-learning capability, which during a match reduces the probability of poor-scoring lines in favour of better-scoring ones, is a significant feature that gave it an advantage over the Cerebellum_Light book in that particular match.
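The Chessbase-style book learning described above amounts to nudging a line's selection weight after each game. A hypothetical sketch of that kind of result-driven update (the function, weight scale, and learning rate are my own invention, not the actual .ctg algorithm):

```python
def update_line_weight(weight, result, lr=0.25):
    """Adjust a book line's selection weight after a game.

    result is the book side's score: 1.0 (win), 0.5 (draw), 0.0 (loss).
    Wins boost the weight, losses shrink it, draws leave it unchanged;
    a small floor keeps the line selectable so it can recover later.
    """
    return max(0.01, weight * (1 + lr * (2 * result - 1)))

# a line that loses repeatedly gets chosen less and less often
w = 1.0
for _ in range(3):
    w = update_line_weight(w, 0.0)
print(round(w, 4))  # prints 0.4219
```

The multiplicative form means repeated losses suppress a line quickly, which is exactly the behaviour that punished the repeated B97 losses in the match above.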

Yes, I see that this can be a problem in testing. What I do for now is calculate and add every lost game that is published to the Cerebellum book. That is an automated process with not much work involved. So, for example, the problem with the Vesely book should be gone, but I have not had time to test it.

Every new book release is recalculated with the test results and games I'm aware of and with user feedback, so it continuously improves. A losing line is of course still possible, but it should become more and more rare.

There are also different types of bad lines, I think. For example, many of the positions in Cerebellum are calculated with a time of about 70 seconds and endgame tablebases on a fast quad-core, so in a 5+1 match without endgame tablebases it may happen that the playing engine is not able to continue with the correct moves, even if they exist, because the thinking time is too short.

According to Thomas Zipproth's comments here, the full Cerebellum package will afford a high degree of flexibility and I will be curious to see what it offers compared to that currently available.

> It is getting time I buy Komodo. It seems excellent in pointing out errors in SF's eval.

Just my perception, but I would describe Stockfish as the more aggressive engine, capable of finding some very deep key moves, whereas Komodo plays with a more positional, subtle style that may be beneficial in closed positions. From my observations Komodo used to have the better endgame technique, but more recently that may have swung slightly in Stockfish's favour, as it can now pick up the win in the endgame too.

Houdini looks very competitive in the current TCEC, and with another two to three months of improvement it will be interesting to see where it lies in style compared to the big two. I'm hoping the Tactical Mode will still be provided and that capability will not be sacrificed.

Excellent! That suits me just fine! You just exposed the weakness in your book. Just last night I won three games on c2p (www.come2play.com) with Black using the French Defense. I do hope more and more players use your book. Heh heh.

Hi Om, why don't you play me sometime and show this noob how it's done? Not the first time I've challenged you either. I'll PM you my UserId and you PM me yours, if you're up to the challenge, GrandPa. Let me know. Regards