Why don't they disclose what their evaluation is? That would be a big step towards knowing the truth.

They can't. The evaluation is a sequence of numbers specifying myriad weights on umpteen-dozen layers of a neural network. This aspect (of the original AlphaGo), in contrast to Stockfish, is addressed in my Feb. 2016 article https://rjlipton.wordpress.com/2016/02/07/magic-to-do/. That this is endemic to "deep learning" has energized a counter-push toward "Explainable AI."

What I wish to know better, incidentally, is the memory footprint of their trained network and how portable it is.
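One can at least bound the footprint from the architecture: the memory is roughly the parameter count times the bytes per weight. A minimal sketch, using entirely hypothetical layer sizes (the actual architecture and precision were not fully disclosed):

```python
# Rough parameter/memory estimate for a convolutional policy/value net.
# The layer sizes below are HYPOTHETICAL placeholders for illustration,
# not the network's actual architecture.

def conv_params(in_ch, out_ch, k):
    """Weights + biases for one k x k convolutional layer."""
    return in_ch * out_ch * k * k + out_ch

def footprint_bytes(layer_params, bytes_per_weight=4):
    """Total memory, assuming 32-bit floats per weight."""
    return sum(layer_params) * bytes_per_weight

# Example: 12 conv layers of 192 filters with 3x3 kernels (made-up numbers).
layers = [conv_params(48, 192, 3)] + [conv_params(192, 192, 3)] * 11
print(f"{sum(layers):,} parameters, "
      f"~{footprint_bytes(layers) / 1e6:.1f} MB at 4 bytes each")
```

At sizes like these the trained network fits in tens of megabytes, which is why portability hinges less on storage than on the hardware needed to evaluate it quickly.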

They are still tuning at the level of a 2850-rated single-core engine, so things will only get significantly more difficult in the future, when the quality of the evaluation terms will have a much higher impact.

As mentioned in the paper, the eval is non-linear, unlike current engines, which use linear eval functions. And they are not tuning the eval; the AI itself tunes the eval autonomously, without human input.
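The distinction can be sketched in a few lines. Below, a classical-style eval is a weighted sum of hand-crafted terms, while a network-style eval passes the same features through a non-linear hidden layer; all weights and feature values are made-up illustrations, not any engine's actual numbers:

```python
# Linear eval (classical engines): score = weighted sum of terms.
def linear_eval(features, weights):
    """Score in arbitrary units; linear in every input term."""
    return sum(f * w for f, w in zip(features, weights))

def relu(x):
    return max(0.0, x)

# Non-linear eval (neural network): one hidden layer with ReLU,
# so the score is no longer a linear function of any single term.
def nonlinear_eval(features, w_hidden, w_out):
    hidden = [relu(sum(f * w for f, w in zip(features, row)))
              for row in w_hidden]
    return sum(h * w for h, w in zip(hidden, w_out))

features = [1.0, 3.0, -0.5]   # e.g. material, mobility, king safety (made up)
print(linear_eval(features, [100.0, 10.0, 25.0]))
print(nonlinear_eval(features,
                     [[0.5, 0.1, -0.2], [-0.3, 0.4, 0.6]],
                     [50.0, -20.0]))
```

Tuning the linear version means adjusting a handful of interpretable weights; tuning the non-linear version means adjusting every entry of the hidden matrices, which is what self-play training does automatically.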