This was a classic match between AlphaZero, the chess program from Google's DeepMind, which is the first top-level program to learn the game purely through self-play rather than hand-crafted evaluation, and Stockfish, currently the world's strongest chess engine.
https://en.chessbase.com/post/alpha-zero-comparing-orang-utans-and-apples
Not only was the match like comparing apples to oranges, as the article above shows; the more I think about it, the more it appears to me that the match actually suggests AlphaZero lost to Stockfish!
To be precise: although AlphaZero won the match by a wide margin, it seems that it is currently inferior to Stockfish.
Here is my reasoning:
Why did Google not follow standard match conditions and give both programs equivalent hardware and time resources? The results would clearly have been far more impressive had AlphaZero won under that setup. That Google chose not to go that route suggests they feared something, namely losing to Stockfish.
Why was AlphaZero's training limited to 4 hours? Why not 8, or 24, or even a couple of weeks? Google obviously had the resources to do that. One possibility is that AlphaZero's learning saturated after 4 hours, and additional training time would not have produced any significant improvement. This would not be surprising, since saturation after a certain amount of training is a well-known phenomenon in machine learning.
If that is true, it is even more concerning: it implies that AlphaZero has reached its learning limits, at least with the current state of the technology, and so cannot, as of now, be made superior to Stockfish.
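The saturation idea above can be illustrated with a toy model. This is a minimal sketch with hypothetical numbers (the `e_max` and `tau` values are invented for illustration, not measured from AlphaZero): under an exponential learning curve, most of the strength gain arrives early, and each extra hour of training adds less and less.

```python
import math

def playing_strength(hours, e_max=3000.0, tau=1.5):
    """Toy exponential-saturation learning curve (hypothetical parameters):
    strength approaches the ceiling e_max asymptotically as training time grows."""
    return e_max * (1.0 - math.exp(-hours / tau))

# Gain in the first 4 hours vs. gain from the next 20 hours:
first_four_hours = playing_strength(4) - playing_strength(0)
next_twenty_hours = playing_strength(24) - playing_strength(4)

print(round(first_four_hours), round(next_twenty_hours))
```

Under this model, going from 4 to 24 hours of training would add only a tiny fraction of what the first 4 hours did, which is consistent with (though of course does not prove) the idea that extending the training run would not have changed the outcome much.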
Neeraj