Singular Extensions: Example

In the seminal 1988 paper on singular extensions by Anantharaman, Campbell, and Hsu, a study based on the 300 test positions of Fred Reinfeld's Win at Chess collection showed how this search-extension strategy greatly improved the tactical capabilities of their chess computer Deep Thought. As a particularly intriguing example, the performance on position #213 was discussed in detail: singular extensions enabled the detection of a mate in 18 in this relatively complex middlegame position in 65 seconds, whereas the very same system with singular extensions deactivated failed to find the mate in reasonable time.

This test has been repeated with Fischerle 0.9.70 SE 64-bit. Employing its standard settings for singular extensions (no singular extensions at Cut nodes; only moves suggested by the transposition table whose stored value is marked there as exact or as a lower bound can become singularity candidates) and using 256 MB of hash-table space, Fischerle finds the mate in 18 in just 11 seconds, processing only 3,874K nodes at a nominal search depth of 14:

Further tests showed that even with singular extensions deactivated, this version of Fischerle finds the mate in 571 seconds (considering 220,825K nodes at nominal depth 20), which is still impressive. This suggests that singular extensions may be less important in modern state-of-the-art engines, which already employ a carefully optimized blend of other variable-depth search strategies (extensions and reductions). While there still seems to be a considerable number of positions in which singular extensions nicely improve tactical capabilities, this may partly explain why recent research on singular extensions no longer confirms the highly positive results that the inventors of this strategy reported back in 1988. Another reason seems to be that the original research focused on a restricted set of test positions (in particular, Reinfeld's Win at Chess collection) and on a quite limited number of computer-vs-human games, whereas the comprehensive evaluation performed nowadays, typically based on thousands of engine-vs-engine matches, is presumably quite a different story.
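The candidate-selection rule quoted above (no singular extensions at Cut nodes; only transposition-table moves stored with an exact or lower-bound value qualify) can be sketched as follows. This is a minimal illustration, not Fischerle's actual code: the `TTEntry`, `Bound`, and parameter names, as well as the minimum-depth threshold, are assumptions chosen for the example.

```python
from dataclasses import dataclass
from enum import Enum

class Bound(Enum):
    EXACT = 0   # score from a full-width (PV) search
    LOWER = 1   # fail-high result: stored value is a lower bound
    UPPER = 2   # fail-low result: stored value is an upper bound

@dataclass
class TTEntry:
    move: str     # best move stored for this position
    value: int    # score in centipawns
    depth: int    # depth at which the entry was searched
    bound: Bound

def is_singular_candidate(entry, node_is_cut, depth, min_depth=8):
    """Return True if the TT move may be tested for singularity.

    Mirrors the settings described in the text: no singular
    extensions at Cut nodes, and only TT moves whose stored value
    is exact or a lower bound qualify. The depth threshold
    (min_depth) is an illustrative assumption.
    """
    if node_is_cut:
        return False            # setting: never extend at Cut nodes
    if entry is None or depth < min_depth:
        return False            # no TT move, or search too shallow
    return entry.bound in (Bound.EXACT, Bound.LOWER)
```

In a real engine, a move passing this filter would then be verified by an exclusion search: all other moves are searched to reduced depth against a lowered window, and only if every alternative fails low is the transposition-table move considered singular and its search depth extended.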