To err is not just human: U of T researchers develop AI that plays chess like a person
For more than a decade, advances in artificial intelligence have made computers capable of consistently defeating humans in chess. But despite their clever moves, they’ve made relatively lousy teachers – until now.
By trading raw power for a more human-like playing style, a new neural network chess engine developed by University of Toronto researchers and collaborators is poised to make for a more effective learning tool and teaching aid.
The Maia Chess engine can accurately predict the way humans of different skill levels play chess and can even point out the mistakes a player should work on to improve their game.
With this new chess engine, the researchers open the door to better human-AI interaction in chess and other domains.
Ashton Anderson, assistant professor in the department of computer science, and PhD student Reid McIlroy-Young collaborated on the project with Jon Kleinberg, a professor of computer science and information science at Cornell University, and Siddhartha Sen, principal researcher at Microsoft Research.
The new chess engine emerged from their paper, “Aligning Superhuman AI With Human Behavior: Chess as a Model System,” presented last year at the Association for Computing Machinery SIGKDD Conference on Knowledge Discovery and Data Mining.
Faced with a problem to solve, self-trained AIs can take a very different route to a solution than a human might. On top of that, humans can find it hard to understand how the AI arrived at its solution. To bridge the gap in understanding, the researchers attempted to model the individual steps humans take to solve a task, rather than focusing on overall human performance.
For Maia, the researchers asked themselves: Instead of designing an AI that focused on the task of playing chess well, what if we designed one that would play chess well in a human-like manner?
“If we algorithmically captured human style, human ability, and crucially, human errors, maybe we would have a chess AI that was much easier to learn from and play with,” Anderson explains, adding that this approach could be expanded to other domains of AI research.
AI first demonstrated its superiority over human chess players in 1997 with IBM’s Deep Blue beating then-world champion Garry Kasparov. Now, desktop computers can run chess engines even stronger than Deep Blue.
The U of T researchers trained nine versions of Maia, corresponding to nine different chess skill levels. At each level, the deep learning framework was trained on 12 million online human games.
By training on games played by humans instead of training itself to win every time, Maia can more closely match human play, move by move, the researchers say.
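The core difference can be sketched in a few lines: instead of rewarding wins, a Maia-style objective penalizes the model whenever it assigns low probability to the move a human actually played. The names and numbers below are hypothetical; Maia's real training runs on the Leela Chess Zero deep-learning framework over millions of positions.

```python
import math

def cross_entropy_loss(policy: dict, human_move: str) -> float:
    """Negative log-probability the policy assigns to the move the human
    actually played; minimizing this pushes the policy toward human play."""
    return -math.log(policy[human_move])

# Hypothetical policy output for one position: move -> probability.
policy = {"e2e4": 0.55, "d2d4": 0.30, "g1f3": 0.15}

# Self-play engines optimize for winning; human-imitation training instead
# minimizes this loss summed over millions of (position, human move) pairs.
loss = cross_entropy_loss(policy, "e2e4")
print(round(loss, 3))  # low loss: the policy already favors the human's choice
```

Averaged over 12 million games per skill level, minimizing this kind of loss is what lets each version of Maia imitate players at its target rating.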
Other attempts to develop chess engines that match human play have been somewhat effective, but Maia’s performance sets the bar higher, they added.
Versions of two popular chess engines, Stockfish and Leela Chess Zero, match human moves less accurately and do not faithfully mimic human play at specific skill levels. Maia is built on the open-source AlphaZero/Leela Chess framework but is trained on real human games rather than games played against itself. The result is higher accuracy than other engines: Maia correctly predicts human moves more than half the time.
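The move-matching metric behind that accuracy figure is simple to state: over a set of positions from human games, count how often the engine's top choice is exactly the move the human played. A minimal sketch, with made-up moves for illustration:

```python
def move_match_accuracy(predictions, human_moves):
    """Fraction of positions where the engine's top choice equals the move
    the human actually played -- the move-matching metric described above."""
    matches = sum(p == h for p, h in zip(predictions, human_moves))
    return matches / len(human_moves)

# Hypothetical top-1 predictions vs. the moves a human actually played
# (UCI notation); the engine and human agree on 3 of the 4 positions.
engine_top_moves = ["e2e4", "g1f3", "f1b5", "e1g1"]
human_moves      = ["e2e4", "g1f3", "f1c4", "e1g1"]

print(move_match_accuracy(engine_top_moves, human_moves))  # 0.75
```

Maia's reported result corresponds to this fraction exceeding 0.5 on held-out human games, per skill level.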
In addition to predicting smart moves, Maia is also adept at predicting human mistakes, even egregious ones or “blunders.” This can be especially helpful for players looking to improve.
“Current chess AIs don’t have any conception of what mistakes people typically make at a particular ability level. They will tell you all the mistakes you made – all the situations in which you failed to play with machine-like precision – but they can’t separate out what you should work on,” Anderson says. “Maia can identify the repeated mistakes you make that are typical of your level, and that you could work on to improve. No other chess AI has that ability.”
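One way to picture the filtering Anderson describes: keep only the blunders that a skill-matched model also predicts (hence typical of that rating level) and that recur often enough to be worth studying. This is an illustrative sketch with hypothetical mistake labels, not Maia's actual pipeline.

```python
from collections import Counter

def typical_mistakes(player_blunders, model_predicted, min_count=2):
    """Among a player's blunders, keep those a skill-matched model also
    predicts -- i.e., mistakes typical of that rating level -- and that
    recur at least `min_count` times, so they are worth training on."""
    predicted = set(model_predicted)
    counts = Counter(b for b in player_blunders if b in predicted)
    return [mistake for mistake, c in counts.items() if c >= min_count]

# Hypothetical labelled blunders from one player's recent games.
blunders = ["hung_knight", "missed_fork", "hung_knight", "back_rank", "missed_fork"]
# Mistakes the skill-matched model predicts players at this level make.
model_predicted = ["hung_knight", "missed_fork"]

print(typical_mistakes(blunders, model_predicted))  # ['hung_knight', 'missed_fork']
```

The one-off "back_rank" slip is dropped: a machine-precision engine would flag it too, but it is not a recurring, level-typical pattern worth prioritizing.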
The researchers are currently developing a personalized version of Maia that can play like a particular person.
“This will make our training tools even more powerful: you could have your own personalized AI that plays like you do, and it could point out the mistakes you make that it predicts you will make – in other words, mistakes you make so often that it correctly guesses you will do it again,” Anderson explains.
Looking ahead, the team plans to conduct a “chess Turing test” to see if human players can tell the difference between a human opponent and Maia.
Ultimately, the researchers hope Maia demonstrates the value in considering the human element when designing AI systems.
“We want to show that AI systems can be easier to work with and learn from if they are built with human interaction, collaboration, and improvement in mind,” Anderson says.
Chess players can face off against three versions of Maia on the free online chess server Lichess: Maia 1100, Maia 1500, and Maia 1900.
Ashton Anderson was supported in part by an NSERC grant, a Microsoft Research Award, and a CFI grant. Jon Kleinberg was supported in part by a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, a MURI grant, and a MacArthur Foundation grant.