Lyudmil Tsvetkov wrote:The training matches are different from the 100 games match with Stockfish.

Yes, the plot on the diagram is from the training games, but 100 games per opening were played, 50 with each colour, and the score below the diagram is from AlphaZero's perspective.

12 openings with reversed colours don't square in any way with 100 played games, so did they actually leave some openings played more than others, or did they not flip colours?

12 openings x 100 = 1,200 games total.

Earlier we were talking about 300 and 100; now 1,200 suddenly appears...
The 64/36 score certainly comes from 100 games, unless they assigned random points for a win.
And in that sample, I see Alpha playing just 1.d4 and 1.Nf3.

Read the series of posts properly. It is 100 games per opening; you clearly don't understand Table 2.

300 games because you were talking about 1.e4 earlier, which appears in 6 diagrams.
How much is 50 x 6?
You were claiming that AlphaZero didn't play 1.e4; I told you it did! It played 1.e4 against SF8 300 times.

See the total summation below: 1,200 games across all 12 openings. Come on, man, do we really have to argue even about this very basic stuff?
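For the record, the arithmetic being argued over is trivial to check. The figures below are taken from this thread's reading of Table 2 (100 games per opening, 50 per colour, 6 of the 12 openings starting 1.e4), not from an independent source:

```python
# Game counts as argued in this thread (per the posters' reading of Table 2).
openings = 12            # common ECO openings in Table 2
games_per_opening = 100  # 50 as White + 50 as Black for each opening

total_games = openings * games_per_opening
print(total_games)  # 1200

# 1.e4 appears in 6 of the 12 openings; AlphaZero had White
# (and so itself played 1.e4) in 50 games of each:
e4_games = 6 * 50
print(e4_games)  # 300
```

So 100 games total and 100 games per opening are simply two different claims; only the latter is consistent with 12 openings.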

Do you have the PGN for the training games, which, btw, are claimed to run into the thousands?

Note that all training games are self-play (no SF8 involved). The 1,200 are all match games against SF8.
No data are given in the PDF about the total number of self-play games used for learning, nor were the self-play PGNs published. Only the SF8 match games were published.
A 100-game match versus SF8 was played for each of the 12 common ECO openings.
I guess you are confused by the plotted graph being put beside the
diagram: the graph is self-play, the diagram is the SF8 match.
Try to read the caption of Table 2 properly, 37 times if need be.

I am unable to read the PDF at all; when I open it, I get a headache.
I usually don't read books that merely reference other material; either you do something original, or better do nothing at all.

I guess too much info is simply tremendously unclear.
You still persist in your claim that the 100-game match was played with 12 different openings.
100 % 12 = 4, so there is a remainder, and this is fully absurd.


1,200 games it was; read the table correctly. You are insisting the total was only 100 games, but it is 100 for EACH opening, and there are 12 openings, so 12 times 100 is 1,200.
Are you serious? This is pre-school basic math.

Lyudmil Tsvetkov wrote:Why don't they disclose what their evaluation is: that will be a big step towards knowing the truth.

They can't. The evaluation is a sequence of numbers specifying myriad weights on umpteen-dozen layers of a neural network. This aspect (of the original AlphaGo), in contrast to Stockfish, is addressed in my Feb. 2016 article https://rjlipton.wordpress.com/2016/02/07/magic-to-do/. That this is endemic to "deep learning" has energized a counter-push toward "Explainable AI."

What I wish to know better, incidentally, is the memory footprint of their trained network and how portable it is.
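To make that point concrete, here is a toy sketch of what a network evaluation looks like. Everything in it is invented for illustration (the layer sizes, the random weights); the real AlphaZero network is a far larger convolutional net. The point is that "disclosing the evaluation" would mean dumping raw weight matrices, not readable terms:

```python
import random

random.seed(0)

# Toy "neural evaluation": the score is just repeated weighted sums over
# learned numbers -- no human-readable terms like "bishop pair = +0.5"
# exist anywhere in the model. All sizes here are made up.

def make_layer(n_in, n_out):
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, x):
    # one dense layer followed by a ReLU nonlinearity
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in layer]

board_features = [random.random() for _ in range(64)]  # stand-in board encoding
layers = [make_layer(64, 32), make_layer(32, 16), make_layer(16, 1)]

x = board_features
for layer in layers:
    x = forward(layer, x)
score = x[0]

# "Disclosing the evaluation" would mean dumping all of these weights:
n_weights = sum(len(layer) * len(layer[0]) for layer in layers)
print(n_weights)  # 2576 even for this toy; real networks have millions
```

Even this toy carries 2,576 opaque numbers, and none of them individually means anything a chess player could interpret.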

In what way does the fact that it uses exponentially more eval terms/neurons than SF make it impossible to release the code?

They are still tuning at the level of a 2850 single-core engine, so things will just get significantly more difficult in the future, when the quality of the terms will have a much higher impact.

Btw, if they did that in 4 hours, then some 400 hours from now, that is, in 2 weeks or so, they will have solved chess.
Do you really believe that?
Want to bet there will not be a new update in 2 weeks' time claiming they have reached the 4000 Elo level?

Not only that, but there will not be a new update in a month's or 2 months' time either. And probably not in half a year, too.

M ANSARI wrote:Some of the games are really remarkable. I think a PC can be configured to play in a similar fashion. Maybe massive GPU floating-point calculations can actually work for chess ... it just needs a different approach. I always thought that a chess engine that could use a Monte Carlo based GPU as a partner would be a very powerful thing, especially in avoiding locked positions, where the horizon effect hurts traditional engines in identifying fortress positions. This really is a game changer, although I would want to know how the hardware stacks up. Hard to tell if we are comparing hardware of similar strength. But in some of the positions it seems that even very powerful hardware cannot find the moves with a normal chess engine, even given enough time to equalize the hardware. Need more information on this, but it does look like a major breakthrough.

No game-changer, rather a scam-changer.
A 50:1 hardware advantage, come on.

maac wrote:Beyond obscure hypotheses about hardware, the point is the METHOD,
something totally revolutionary: the fact that the learning process took only hours, days,
from zero to super-GM. The games are astounding. Why try to disparage this?

Come on, Miguel, do you really believe the 4-hour figure?
If that is so, in 2 weeks we will have a 5000 Elo engine.

Wanna bet they will not release an update in 2 weeks?
Or in 2 months?
And probably not in 2 years either?

The hardware part is very well known: 4 TPUs are worth around 1,000 standard cores at least. 1,000 vs 60? Plus other relative disadvantages for SF?
Give me a break.

IanO wrote:Wonderful result! Have they publicized the shogi results or published those match games anywhere? Unlike the chess and Go results, that appeared to be a clear advance over the state of the art.

I see there is lots of bickering about the fairness of the Stockfish match. I look forward to AlphaZero's participation in the World Computer Chess Championship, where all participants can use as much preparation and beefy hardware as they want! That appears to be the appropriate forum for users of custom hardware to strut their stuff. Heck, DeepMind would be superstars at the attached computer games conference!

Other VERY IMPORTANT notes:
1. SF used only a 1 GB (!!) hash table.
2. AlphaZero did not start from zero knowledge about chess,
because it was fed a lot of human games at start-up.
This explains why AlphaZero plays openings in such a human-like way.
I think it would be more correct if Stockfish got 64 GB of hash
and a good human opening book like the Fritz Power Book.

Lyudmil Tsvetkov wrote:It is not at all clear to me where were books used and where not.

I'm sure opening books were not used...
In the early self-play games things like 1.a3, 1.a4, etc. were probably tried by AlphaZero...
eventually it learned that 1. e4 or 1. d4 had the highest success rates.
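That intuition can be sketched with a toy win-rate tracker. To be clear, this is NOT AlphaZero's actual method (which uses MCTS guided by a neural network), and the per-move win probabilities below are invented purely for illustration:

```python
import random

random.seed(42)

# Invented first-move win probabilities, loosely matching the intuition
# that 1.e4/1.d4 score better in self-play than 1.a3/1.a4.
true_winrate = {"e4": 0.55, "d4": 0.54, "Nf3": 0.52, "a3": 0.45, "a4": 0.44}

wins = {m: 0 for m in true_winrate}
TRIALS = 20000  # simulated self-play games per candidate first move

for move, p in true_winrate.items():
    for _ in range(TRIALS):
        if random.random() < p:
            wins[move] += 1

empirical = {m: wins[m] / TRIALS for m in true_winrate}
best = max(empirical, key=empirical.get)
print(best)  # with enough games, one of the strong moves wins out
```

Given enough simulated games, the empirical rates converge to the underlying ones, and the weak first moves like 1.a3 simply stop being chosen; that is the flavour of what "learning the opening from scratch" means here.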

Books or no books, I think AlphaZero would still demolish SF8.
Just look at game 9: SF8 played a decent French Defence, but it was dismantled with
amazing tactical and strategic shots by AlphaZero that seem to be beyond the reach of alpha-beta engines.

It would be nice if we could feed some difficult EPD positions to AlphaZero
to estimate its Elo strength.

I am considering g4 and Bg6; g4 especially was my first choice in under a second.
People often don't believe that advanced long pawn chains are stronger than a piece, but see what happens here...

I guess the main fault is that they are testing at 1 minute. Long chains have failed in SF testing at least 5 times or so, and still they are an extremely valid concept.

Books are important. As I have been correctly claiming for a very long time, the French is dangerous or even lost, but people won't listen.
The game was lost much earlier, already in the opening.

LOL, there go your ridiculous claims again. You haven't claimed the French is lost. Let's be
honest, please: you only claimed the French WINAWER is lost, nothing else.
You claimed all other variations of the French are playable, but not the WINAWER line.

The Winawer discussion was a thread of its own, I remember clearly.

Come on, we had to choose a more precise variation in order to test it in a game.
The structure of the Advance and of the Winawer/McCutcheon/Classical, etc., is the same: White has a central e5 pawn, which e7-e6 has allowed, and a lot more space.

Anyway, those engines are far too weak to assess anything. Perfect play is at 5,000-6,000 Elo at least, and Alpha will never progress much.