Let's face reality, the sources are available for everybody free of use. They won't go away. Some will produce stronger engines based on these free available sources, I want to know.

Besides of this, there is no shred of evidence Ippo & co are disassembled from Rybka. Sorry Vas.

Ed

For sure. If you have links to rating lists which are more inclusive, send me a PM, and I'll post them. I just get tired of using google to find the links, so I posted the ones I check from time to time.

The rating list guys have to maintain good relations with the companies and programmers who deliver the engines. The commercial engines/programmers in particular put pressure on the rating list guys, some of it diplomatic, some of it between the lines, not to "support" certain engines. Some of this pressure is applied not directly but by people from the programmer's circle, the so-called water-bearers (in German: Wasserträger). They are often the programmer's right hand: they operate the programs, control the forums, and some are even admins at playchess etc.

Together with the McCarthy-like computer chess police, this creates the witch hunts and the strange censorship in some commercial forums such as the Hiarcs forum, the Rybka forum, Talkchess (CCC), etc.

Does anybody have a proof that rating list ELO correlates to chess playing skill?

Or that the relationship between rating list ELO and chess playing skill is in any way linear?

Or that increases or decreases in rating list ELO actually correspond to increases or decreases in chess playing skill?

Or that rating lists composed from multi-games between similar machine entities actually measure anything useful at all?

Why not just throw these lists into the trash bin?

What is "playing skill"? The rating lists presumably measure how engines performed against other engines under the match conditions, and the Elos can be used to estimate future results.
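To make the "estimate future results" part concrete, here is a minimal sketch of the standard logistic Elo model (generic, not specific to any particular rating list): a rating difference maps to an expected score per game.

```python
# Expected score from an Elo difference, per the standard logistic model.
# A 100-point advantage predicts roughly a 64% score over many games.
def expected_score(elo_diff: float) -> float:
    """Expected score (win = 1, draw = 0.5) for the higher-rated side."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))
```

So an engine rated 100 Elo above another is predicted to score about 0.64 per game under the same match conditions; whether that says anything about "skill" in a broader sense is exactly the question above.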

I agree that it is a bit of a red herring though. It's useful if you want to win engine matches, but, for me, this is one of the least interesting aspects of computer chess. I'd rather ask how useful I find an engine for analysis, and how enjoyable it is to play against.

I can add a philosophical question that has recently been raised with the R4 release (and the testing therein).

Why test "ponder off" while simultaneously allowing the engine to manage its time? Why not just test old-style lightning chess with repeated "go movetime 10000" commands or the like? I can agree that "ponder on" tests the whole engine (at least if you don't attach the book to the engine) at chess-playing, but why bother to test time management once you've made the leap to turning ponder off? I think one author said that "ponder off" was rather arbitrary, rather like saying "no qsearch". Given that time management is variously claimed to be worth as much as 20 ELO (or more), it seems that the "ponder off" lists might want to exclude this facet, especially if the idea of "ponder off" testing is to give an idea of how good an analysis to expect from the engine. Then again, some engines seem to use a different schema in analysis versus gameplay, so maybe no metric is perfect.
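The fixed-movetime alternative above can be sketched in a few lines of UCI. Instead of handing the engine a clock ("go wtime ... btime ..."), which engages its time management, each search gets a fixed budget, so time management never runs. This is only an illustration of the command sequence, not any list's actual harness:

```python
# Sketch: build the UCI command sequence for one fixed-movetime search.
# With "go movetime N" the engine must spend (about) N ms on the move,
# so its time-management code is taken out of the measurement entirely.
def fixed_movetime_search(moves: list[str], movetime_ms: int = 10000) -> list[str]:
    """Return the UCI commands for one search from the given move list."""
    position = "position startpos"
    if moves:
        position += " moves " + " ".join(moves)
    return [position, f"go movetime {movetime_ms}"]
```

A test harness would send these lines to the engine process and read back "bestmove"; for example, `fixed_movetime_search(["e2e4"], 10000)` yields `["position startpos moves e2e4", "go movetime 10000"]`.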