Lyudmil Tsvetkov wrote:Why don't they disclose what their evaluation is: that will be a big step towards knowing the truth.

They can't. The evaluation is a sequence of numbers specifying myriad weights on umpteen-dozen layers of a neural network. This aspect (of the original AlphaGo), in contrast to Stockfish, is addressed in my Feb. 2016 article https://rjlipton.wordpress.com/2016/02/07/magic-to-do/ That this opacity is endemic to "deep learning" has energized a counter-push toward "Explainable AI."

What I wish to know better, incidentally, is the memory footprint of their trained network and how portable it is.
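On the footprint question, a rough figure follows from parameter count times bytes per weight. A toy back-of-envelope sketch, assuming the 20-block, 256-filter residual tower described in the AlphaZero preprint (heads and batch-norm parameters omitted, so this is only a lower bound):

```python
# Back-of-envelope memory footprint for a convolutional residual tower.
# Assumed configuration: 20 residual blocks, 256 filters, 3x3 convolutions,
# two conv layers per block; policy/value heads and batch-norm are omitted.
blocks, filters, k = 20, 256, 3
params_per_conv = k * k * filters * filters       # 589,824 weights per conv
params = blocks * 2 * params_per_conv             # two convs per block
bytes_fp32 = params * 4                           # 4 bytes per fp32 weight
print(f"~{params / 1e6:.1f}M weights, ~{bytes_fp32 / 2**20:.0f} MiB at fp32")
```

By this estimate the trained weights alone would be on the order of tens of millions of parameters and roughly 100 MiB at single precision; quantizing to fewer bytes per weight shrinks that proportionally, which bears directly on portability.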

They are still tuning at the level of a 2850 single-core engine, so things will only get significantly more difficult in the future, when the quality of the evaluation terms will have a much higher impact.

As mentioned in the paper, the eval is non-linear, unlike current engines, which use linear eval functions. And nobody is hand-tuning the eval: the AI itself tunes the eval autonomously, without human input.
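The linear/non-linear distinction can be shown in a toy sketch. The feature names and weights below are made up for illustration; real engines use hundreds of terms and real networks have millions of weights:

```python
import math

# Made-up position features and weights, for illustration only.
features = [1.0, 3.0, -2.0]   # e.g. material, mobility, king exposure
weights  = [1.0, 0.1, 0.5]

# Classical engine style: the eval is a weighted sum, linear in the features.
linear_eval = sum(w * f for w, f in zip(weights, features))

# NN style: the inputs pass through a non-linearity (tanh here), so the
# output can no longer be expressed as any fixed weighted sum of features.
nn_eval = sum(math.tanh(w * f) for w, f in zip(weights, features))

print(linear_eval, nn_eval)
```

The point of the non-linearity is that the network can represent interactions between features (e.g. king safety mattering more when queens are on) that a single linear sum cannot.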

Werewolf wrote:1 GB hash allows them to say SF nps are really high while hiding that they’ve weakened its search.

I would like to see a rematch at a better time control, with much more hash, full tablebases, a tournament-quality opening book, and the latest asmFish.

I bet it’d be much closer then.

If you look at the results in the B40 Sicilian, the difference is only 38.5 Elo.
With the latest Brainfish using Cerebellum limited to, say, 12 moves, 32 GB hash, 6-man Syzygy, and a 60'+15'' TC, I'm pretty confident AlphaZero would not win; at best the result would be indecisive.
If I had access to the training games, I could construct a book that would give SF a +100 Elo advantage without much trouble.