Seminar • Artificial Intelligence • Mastering Board Games by External and Internal Planning with Language Models

Thursday, March 6, 2025 1:00 pm - 2:00 pm EST (GMT -05:00)

Please note: This seminar will take place in DC 1302.

Marc Lanctot, Research Scientist
Google DeepMind

While large language models perform well on a range of complex tasks (e.g., text generation, question answering, summarization), robust multi-step planning and reasoning remain a considerable challenge for them. In this work we show that search-based planning can significantly improve LLMs’ playing strength across several board games (Chess, Fischer Random / Chess960, Connect Four, and Hex).

We introduce, compare, and contrast two major approaches. In external search, the model guides Monte Carlo Tree Search (MCTS) rollouts and evaluations without calls to an external engine; in internal search, the model directly generates in-context a linearized tree of potential futures and a resulting final choice. Both build on a language model pre-trained on relevant domain knowledge, capturing the transition and value functions across these games. We find that our pre-training method minimizes hallucinations: the model is highly accurate at predicting states and legal moves. Additionally, both internal and external search improve win rates against state-of-the-art bots, even reaching Grandmaster-level performance in chess while operating on a search budget (moves considered per decision) similar to that of human Grandmasters. The way we combine search with domain knowledge is not specific to board games, suggesting direct extensions into more general language model inference and training techniques.
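To make the external-search idea concrete, the following is a minimal, purely illustrative sketch: an MCTS loop whose move priors and leaf evaluations come from a model rather than a game engine. Everything here is an assumption for illustration only; the toy take-away game, the names model_prior and model_value (mocked as uniform priors and neutral values standing in for the pre-trained language model's outputs), and the PUCT constant are not the interfaces or settings of the work being presented. Internal search, by contrast, would linearize such a tree of futures directly into the model's context.

    import math

    # Hypothetical stand-ins for the pre-trained model's policy and value
    # outputs. A real system would query the language model here; these
    # placeholders exist only so the sketch runs on its own.
    def model_prior(stones):
        moves = legal_moves(stones)
        return {m: 1.0 / len(moves) for m in moves}  # uniform placeholder

    def model_value(stones):
        # At a terminal state the player to move has just lost
        # (the opponent took the last stone); otherwise stay neutral.
        return -1.0 if is_terminal(stones) else 0.0

    # Toy take-away game: remove 1-3 stones; taking the last stone wins.
    def legal_moves(stones):
        return [t for t in (1, 2, 3) if t <= stones]

    def apply_move(stones, take):
        return stones - take

    def is_terminal(stones):
        return stones == 0

    class Node:
        def __init__(self, state):
            self.state = state
            self.children = {}   # move -> Node
            self.visits = 0
            self.value_sum = 0.0
            self.priors = model_prior(state) if not is_terminal(state) else {}

        def q(self):
            return self.value_sum / self.visits if self.visits else 0.0

    def select_child(node, c_puct=1.5):
        # PUCT: trade off the child's value (negated, since it is stored
        # from the opponent's perspective) against a prior-weighted bonus.
        def score(move):
            child = node.children[move]
            u = c_puct * node.priors[move] * math.sqrt(node.visits) / (1 + child.visits)
            return -child.q() + u
        return max(node.children, key=score)

    def simulate(node):
        # One simulation: descend via PUCT, expand a leaf, evaluate it
        # with the model's value, and back the result up with sign flips.
        if is_terminal(node.state):
            value = model_value(node.state)
        elif not node.children:
            for move in node.priors:
                node.children[move] = Node(apply_move(node.state, move))
            value = model_value(node.state)
        else:
            value = -simulate(node.children[select_child(node)])
        node.visits += 1
        node.value_sum += value
        return value

    def best_move(state, num_simulations=200):
        root = Node(state)
        for _ in range(num_simulations):
            simulate(root)
        # Act on visit counts, as is standard in MCTS.
        return max(root.children, key=lambda m: root.children[m].visits)

    print(best_move(10))   # demo: choose a move for the 10-stone position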


Bio: Marc Lanctot is a research scientist at Google DeepMind. His research interests include multiagent reinforcement learning, computational game theory, multiagent systems, and game-tree search. In the past few years, Marc has investigated general agent evaluation based on computational social choice and game-theoretic approaches to multiagent reinforcement learning, with applications to fully and partially observable games.

Marc received a Ph.D. in artificial intelligence from the Department of Computer Science at the University of Alberta in 2013. Before joining DeepMind, he completed a postdoctoral research fellowship on Monte Carlo tree search methods in games at the Department of Knowledge Engineering, Maastricht University, the Netherlands.