Do you think the LC0 developers could get the binaries used by the A0 team?

Unless Google's production systems have changed a lot since I worked there a few years ago, it's highly unlikely you'll be able to take production binaries built inside Google and get them to run on a regular install. There's just so much from the production environment (Borg, Chubby, etc.) that you don't have on the outside. Similarly, there would be so many Google-specific libraries linked in there that giving out binaries would just be out of the question from a confidentiality perspective.

Google open-sources a lot of stuff, generally by untangling it from Google3 and packaging it up into something more standard for the outside. You'll have to hope for that.

So you also think the LC0 developers would not be able to reverse engineer a Google product to gain information for LC0's development.

When he asked for the binaries, I guess he was probably asking for the NN. Since the format is well known, there is absolutely no reason why they wouldn't share it if they are really so devoted to open source and sharing knowledge.

I didn't say Google was devoted to open source. I said they open sourced a lot of stuff.

I don't actually know whether the internal TensorFlow is compatible with open-source TF these days. (It probably is, though.) The trained network would be useful, of course, but it's far from the entire story, and just a one-off binary dump wouldn't be too useful in the long run.

But of course they are never going to do it, because one could actually run those NNs and realize they are not nearly as strong as the publication suggests...

Accusing someone of lying in a research publication is a pretty strong claim. I have to wonder, do you take the same stance towards other private engines? I'm not entirely sure why there needs to be so much hostility.

When he asked for the binaries, I guess he was probably asking for the NN. Since the format is well known, there is absolutely no reason why they wouldn't share it if they are really so devoted to open source and sharing knowledge.

I don't actually know whether the internal TensorFlow is compatible with open-source TF these days. (It probably is, though.) The trained network would be useful, of course, but it's far from the entire story

But isn't one of the main points of a TF SavedModel that it is compatible and portable etc.? You can just give it to your friends to use.

When he asked for the binaries, I guess he was probably asking for the NN. Since the format is well known, there is absolutely no reason why they wouldn't share it if they are really so devoted to open source and sharing knowledge.

I didn't say Google was devoted to open source. I said they open sourced a lot of stuff.

They open source only for PR purposes. They are far less sincere about open source than Microsoft, which is in itself really ironic.

I don't actually know whether the internal TensorFlow is compatible with open-source TF these days. (It probably is, though.) The trained network would be useful, of course, but it's far from the entire story, and just a one-off binary dump wouldn't be too useful in the long run.

Why would something as basic as a neural network architecture model not be compatible between open-source and internal TF?
If that were really the case, it would be one more hell of an argument that anything Google open sourced was done to gain PR or increase their revenue, and that it has nothing to do with the true spirit of open source.
The NN is everything. The rest could be written in a couple of days in TF with what was already available in that preprint and the AG0 paper. NPS doesn't mean a thing for actual verification of performance.

But of course they are never going to do it, because one could actually run those NNs and realize they are not nearly as strong as the publication suggests...

Accusing someone of lying in a research publication is a pretty strong claim. I have to wonder, do you take the same stance towards other private engines? I'm not entirely sure why there needs to be so much hostility.

No other private engine runs a PR campaign to sell its online services (like cloud TPUs). Now that Google has actually realized that the performance of their cloud TPUs (v2) is on par with an RTX 2070 (yes, 2070, not 2080 Ti) that costs $500, and that Lc0 is almost on par with A0 (or maybe even stronger; we will never know, because the NNs are private), they finally decided to really publish something and give us a little more insight, and people instantly feel like everyone should be enormously grateful to them.
You are not working in Google's PR department, so why such a need to defend them? Are they some kind of holy cow, so that writing anything bad about them is forbidden?
And no, I don't hold the same stance towards other private engines, because none of them is created by a giant, mean corporation; plus, their performance is easily verifiable and in most cases already very well established.

clumma wrote: ↑
Fri Dec 07, 2018 1:26 am
Can you help me locate the games AZ played against Brainfish? They don't seem to have their own file, and I don't see any identifying info in alphazero_vs_stockfish_all.pgn

Only games from the primary evaluation and TCEC openings have been released (no opening books).

D'oh!

Why?

I've been wanting to see AZ vs. BF since last year; the first thing I checked with this paper was whether you had tried it, and 99% of my excitement about it is that you did.

Also, the results look really weird: White wins went down, but Black wins went up?

-Carl

Geonerd
Posts: 44
Joined: Fri Mar 10, 2017 12:44 am

Re: Alphazero news

Post by Geonerd » Fri Dec 07, 2018 3:51 am

IanO wrote: ↑
Fri Dec 07, 2018 1:19 am
Even more exciting: they released the full game scores of the hundred game matches for all three games, chess, shogi, and go!

Well... well, a very, very interesting development. Following up on last week's AlphaZero revival (see YouTube) during the controversial World Championship match between Caruana and Carlsen, and now about one year to the day after the DeepMind group's claims of the "destruction of SF 8" (with no opening book), additional games and claims have emerged. Loads of new information to review and digest (this may take some time). I found it interesting that the data and article mention KnightCap, Meep, and Giraffe as the learning examples for AlphaZero (no one ever seems to mention the amazing and surprising finds NeuroGrapeFruit or NeuroStockfish). Meanwhile, Leela0 and SF 10 have new versions out. This debate continues with no end in sight...

Afraid not! AlphaZero is taking up all my time these days, and it's a very exciting project with lots of uncharted territory ahead. AlphaZero is basically everything I ever wanted Giraffe to become... and then a lot more. I have never been this excited about computer chess in my whole life.

Congrats on all that you and Google are doing for computer chess. Looking forward to your future achievements.

That's plus 32 Elo. If the opponent was actually SF9 (does anyone know?), that would be about what I'd expect from SF10 under roughly TCEC conditions. So perhaps they are about equal, given $20,000 or so of hardware for each. So it's not yet clear that NN plus MCTS has surpassed conventional alpha-beta in chess. What they have demonstrated is a good way to utilize the GPU for chess, as Lc0 is also doing. Perhaps there are better ways yet to be found.
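For reference, a 32 Elo edge corresponds to an expected score of roughly 54.6% per game under the standard logistic Elo model:

```python
# Expected score for a given Elo advantage (standard logistic model):
# E = 1 / (1 + 10^(-diff / 400))
def expected_score(elo_diff):
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

e = expected_score(32)   # ~0.546, i.e. about 54.6% per game
```

So over a 100-game match, +32 Elo is only about 9 match points of separation, which is why hardware and opponent-version details matter so much here.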

I think it goes deeper than that. If someone, even someone whose knowledge and technical savvy you deeply respected, had told you a few years ago that they could get a program that was nearly one thousand times slower in NPS to compete with and even beat the best of the day on a PC, I am guessing you would have rolled your eyes at them. I know I would have.

Yet that is what AlphaZero has done, and we are even able to bring this to the home user's PC thanks to DeepMind's generosity with their knowledge, as well as the fantastic Leela Chess community efforts. In other words, this is not limited to some absurdly exotic hardware no one could ever hope to obtain. And I am not even commenting on the whole self-learning process, which is what has been the focus.

What is more, to achieve this, you are looking at an incredibly evolved eval function (not precisely an eval function, but the term helps illustrate the point) that has roughly 28 million values, compared to a few thousand at most for even the most sophisticated predecessors. In all the years I have seen discussions on the fight between smart searchers and fast searchers, I have never seen anyone come close to imagining how enormous a difference it would take, much less realize and prove it.
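The ~28 million figure is plausible from a back-of-envelope count. Assuming the chess architecture described in the AlphaZero preprint (119 input planes, 20 residual blocks of two 3x3 convolutions with 256 filters each), the convolutional tower alone already lands in that ballpark; batch-norm parameters and the policy/value heads, omitted here, add the rest:

```python
# Rough parameter count for an AlphaZero-style residual tower (chess).
# Assumed figures from the preprint: 119 input planes, 256 filters,
# 20 residual blocks of two 3x3 convolutions each. Heads and batch-norm
# parameters are omitted, so this is a lower bound on the total.
def conv_params(in_ch, out_ch, k=3):
    return k * k * in_ch * out_ch

input_conv = conv_params(119, 256)        # initial convolution
tower = 20 * 2 * conv_params(256, 256)    # residual tower
total = input_conv + tower                # roughly 24 million
```

Compare that with a classical hand-tuned evaluation, where the weight count is measured in the thousands.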

It is pure genius.

"Tactics are the bricks and sticks that make up a game, but positional play is the architectural blueprint."