Hard Light Productions Forums
Off-Topic Discussion => General Discussion => Topic started by: Luis Dias on December 06, 2017, 03:42:40 pm
-
So what happens when you take AlphaGo Zero's promise of being the kind of algorithm that only needs someone to give it the rules of a game, and which then learns by itself so well that it can annihilate its predecessors (https://www.scientificamerican.com/article/ai-versus-ai-self-taught-alphago-zero-vanquishes-its-predecessor/), and give it four hours of play on one of the oldest and most cherished games of all time?
Well, it learns the game to the point of annihilating the top computer engine, that's what!
https://www.chess.com/news/view/google-s-alphazero-destroys-stockfish-in-100-game-match
Chess changed forever today. And maybe the rest of the world did, too.
A little more than a year after AlphaGo sensationally won against the top Go player, AlphaZero has obliterated the highest-rated chess engine.
Stockfish, which for most top players is their go-to preparation tool, and which won the 2016 TCEC Championship and the 2017 Chess.com Computer Chess Championship, didn't stand a chance. AlphaZero won the closed-door, 100-game match with 28 wins, 72 draws, and zero losses.
Oh, and it took AlphaZero only four hours to "learn" chess. Sorry humans, you had a good run.
Kasparov was excited. Magnus Carlsen's second, GM Peter Nielsen, said:
"After reading the paper but especially seeing the games I thought, well, I always wondered how it would be if a superior species landed on earth and showed us how they play chess. I feel now I know."
My mind was blown. When I saw what AlphaGo Zero did and the way it accomplished it, I immediately thought of chess and what it could do there. If it could achieve the same result, it would, in my head at least, definitely prove the concept. Apparently, it did. And it took four hours to do so. I'm just ... wow.
The full paper is here. https://www.scientificamerican.com/article/ai-versus-ai-self-taught-alphago-zero-vanquishes-its-predecessor/
-
I was kind of unimpressed by the result specifically, since:
1. The hardware AlphaZero used was considerably better than Stockfish's (and they chose a really odd configuration: 64 threads and only 1 GB of hash?!?)
2. They put Stockfish at a disadvantage by not allowing it to use an opening book
Now don't get me wrong, the level AlphaZero managed to reach in such a short time is a tremendous achievement, but the way they stacked the odds in their favor was kind of sickening.
That said, the most exciting thing to come out of this has been the openings it chose to play. People are having fits over it not choosing to play some openings (e.g. the King's Indian).
-
That ninth game with Kxd2, Jesus Christ, what a game.
I was kind of unimpressed by the result specifically, since:
1. The hardware AlphaZero used was considerably better than Stockfish's
4 TPUs for AlphaZero vs. 64 threads for Stockfish. You may be right, but given that Stockfish (and any other chess engine, for that matter) scales with sharply diminishing returns as you add more threads, this is a meaningless issue. With the hardware used, Stockfish is rated around 3400 Elo.
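The diminishing-returns point can be made concrete with a back-of-the-envelope sketch. All the numbers here are my own assumptions, not from the paper: engine-testing folk wisdom puts a doubling of search speed at very roughly 50-70 Elo, so modeling the Elo gain as logarithmic in effective speedup shows how little each extra thread buys at 64 threads.

```python
import math

# Assumed rule of thumb (not from the paper): doubling an engine's
# effective search speed is worth very roughly 50-70 Elo.
ELO_PER_DOUBLING = 60  # assumed midpoint of that range

def elo_gain(threads, efficiency=0.7):
    """Estimate Elo gained over a single thread, assuming each added
    thread contributes only `efficiency` of a full core's worth of
    search speed (a crude stand-in for parallel-search overhead)."""
    effective_speedup = 1 + (threads - 1) * efficiency
    return ELO_PER_DOUBLING * math.log2(effective_speedup)

for t in (1, 4, 16, 64):
    print(t, "threads:", round(elo_gain(t)), "Elo over 1 thread")
```

In this toy model the first few threads are worth ~30 Elo apiece, while going from 16 to 64 threads adds only ~2-3 Elo per extra thread: that is the "diminishing returns" in the quote, and it is why "64 threads vs. 4 TPUs" is not as lopsided as the raw counts suggest.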
2. They put Stockfish at a disadvantage by not allowing it to use an opening book
From the games I've seen, I can hardly blame the result on the lack of an opening book. It's evident from the examples that AlphaZero is on another level altogether. It toys with Stockfish with its beautiful positional play.
Now don't get me wrong, the level AlphaZero managed to reach in such a short time is a tremendous achievement, but the way they stacked the odds in their favor was kind of sickening.
Now don't get me wrong either, but this kind of cynicism reminds me a lot of when Kasparov lost to Deep Blue and went on a rampage about how the conditions weren't perfect and so on.
That said, the most exciting thing to come out of this has been the openings it chose to play. People are having fits over it not choosing to play some openings (e.g. the King's Indian).
Lo and behold, it plays the Berlin. Bwahahaha.
-
I was kind of unimpressed by the result specifically, since:
1. The hardware AlphaZero used was considerably better than Stockfish's
4 TPUs for AlphaZero vs. 64 threads for Stockfish. You may be right, but given that Stockfish (and any other chess engine, for that matter) scales with sharply diminishing returns as you add more threads, this is a meaningless issue. With the hardware used, Stockfish is rated around 3400 Elo.
1 GB is laughably low with 4 threads, never mind 64. And going by their nodes-per-second count, their hardware probably doesn't even have 64 cores.
Also, chess engines are some of the few pieces of software that can achieve superlinear speedups from adding cores so...
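To put the 1 GB figure in perspective, here is a quick sketch. Both inputs are my assumptions: ~16 bytes per transposition-table entry (Stockfish packs ~10-byte entries into 32-byte clusters, so this is the right ballpark) and a search speed on the order of the ~70 million nodes/sec reported for Stockfish in the match.

```python
# Back-of-the-envelope on how quickly a 1 GB transposition table fills.
HASH_BYTES = 1 * 1024**3        # 1 GB, as configured in the match
BYTES_PER_ENTRY = 16            # assumed average, including cluster padding
NODES_PER_SECOND = 70_000_000   # assumed, order of the reported speed

entries = HASH_BYTES // BYTES_PER_ENTRY
seconds_to_fill = entries / NODES_PER_SECOND

print(f"{entries:,} entries")
print(f"filled in ~{seconds_to_fill:.1f} s of search")
```

Under these assumptions the whole table holds roughly one second's worth of search, so at a minute per move the engine is constantly overwriting entries it may still need, which is why 1 GB looks so undersized for 64 threads.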
2. They put Stockfish at a disadvantage by not allowing it to use an opening book
From the games I've seen, I can hardly blame the result on the lack of an opening book. It's evident from the examples that AlphaZero is on another level altogether. It toys with Stockfish with its beautiful positional play.
Maybe so, but there are some instances where you can see the lack of an opening book having a significant influence, e.g. 9... c4?! in the fourth example game they gave in their article.
Now don't get me wrong, the level AlphaZero managed to reach in such a short time is a tremendous achievement, but the way they stacked the odds in their favor was kind of sickening.
Now don't get me wrong either, but this kind of cynicism reminds me a lot of when Kasparov lost to Deep Blue and went on a rampage about how the conditions weren't perfect and so on.
So you like seeing the paper's authors setting up a strawman even when they don't need to?
That said, the most exciting thing to come out of this has been the openings it chose to play. People are having fits over it not choosing to play some openings (e.g. the King's Indian).
Lo and behold, it plays the Berlin. Bwahahaha.
And the French of all things.
-
I was kind of unimpressed by the result specifically, since:
1. The hardware AlphaZero used was considerably better than Stockfish's
4 TPUs for AlphaZero vs. 64 threads for Stockfish. You may be right, but given that Stockfish (and any other chess engine, for that matter) scales with sharply diminishing returns as you add more threads, this is a meaningless issue. With the hardware used, Stockfish is rated around 3400 Elo.
1 GB is laughably low with 4 threads, never mind 64. And going by their nodes-per-second count, their hardware probably doesn't even have 64 cores.
Also, chess engines are some of the few pieces of software that can achieve superlinear speedups from adding cores so...
You're right, I hadn't noticed the 1 GB hash thing. It's pathetic. It should have been 64 GB at the very least.
Maybe so, but there are some instances where you can see the lack of an opening book having a significant influence, e.g. 9... c4?! in the fourth example game they gave in their article.
Nice catch.
So you like seeing the paper's authors setting up a strawman even when they don't need to?
Your 1 GB hash point made me reevaluate this stuff. There's also the issue that, arguably, 4 TPUs are utter powerhouses compared to the CPUs Stockfish used. More data needs to be released for a better evaluation of what went on, and what is and what is not fair when comparing CPUs to TPUs is also a big question.
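For a sense of scale, a very rough comparison of raw arithmetic throughput. Both figures are my own ballpark assumptions: the first-generation TPU is commonly quoted at around 92 trillion 8-bit ops/sec, and a 2017-era multi-core server CPU at low single-digit teraFLOPS.

```python
# Rough raw-arithmetic comparison, all numbers assumed for illustration.
TPU_OPS = 92e12      # assumed peak int8 ops/sec per first-gen TPU
N_TPUS = 4           # number of TPUs reported for AlphaZero's play
CPU_FLOPS = 1.5e12   # assumed whole-machine CPU throughput

ratio = (N_TPUS * TPU_OPS) / CPU_FLOPS
print(f"~{ratio:.0f}x more raw arithmetic on the TPU side")
```

The caveat, and the reason the fairness question is genuinely hard: raw op counts don't translate into chess strength, because one side is running branchy alpha-beta tree search and the other batched neural-network inference, and those workloads use their hardware completely differently.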
And the French of all things.
At least, the games were fascinating.
-
Regardless of all the issues being raised, the games were truly entertaining, and I honestly believe there's something to be said for DeepMind's own "organic" pattern-seeking algorithm. It does give rise to beautiful chess; see below.