Monte-Carlo Tree Search (MCTS) is a family of sampling-based search algorithms widely used for online planning in sequential decision-making domains, and at the heart of many recent breakthroughs in AI. Understanding the behavior of MCTS agents is non-trivial for developers and users alike, as it emerges from often large and complex search trees, consisting of many simulated possible futures, their evaluations, and their relationships to each other. This paper presents our ongoing exploration of possible explanations for MCTS decision-making and behavior. It makes a first attempt at tackling some of the challenges previously posed for explainable search, which include: meaningfully summarizing the space of possible futures spanned by the AI’s available actions and their possible consequences, in order to explain the AI’s choices between them; treating such explanations not as static objects but as interactive conversations between user and AI; and understanding explanation not as a one-way flow of information from the AI to the user, but as a tool for human-AI collaboration that leverages both AI and human capabilities in problem solving.
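To make the search trees referred to above concrete, the following is a minimal sketch of the four canonical MCTS phases (selection via UCB1, expansion, random rollout, backpropagation). The toy domain and all function names (`legal_actions`, `step`, `is_terminal`, `evaluate`) are illustrative assumptions for this sketch, not code from the paper.

```python
import math
import random

# Hypothetical toy single-player domain: start at 0, add 1 or 2 per step;
# reaching exactly 10 scores reward 1.0, overshooting scores 0.0.
def legal_actions(s): return [] if s >= 10 else [1, 2]
def step(s, a):       return s + a
def is_terminal(s):   return s >= 10
def evaluate(s):      return 1.0 if s == 10 else 0.0

class Node:
    """One node of the search tree: a state plus visit/value statistics."""
    def __init__(self, state, parent=None, action=None):
        self.state = state
        self.parent = parent
        self.action = action                 # action leading here from parent
        self.children = []
        self.untried = legal_actions(state)  # actions not yet expanded
        self.visits = 0
        self.value = 0.0                     # sum of simulation returns

def uct_select(node, c=1.4):
    """Pick the child maximizing the UCB1 score (exploitation + exploration)."""
    return max(node.children,
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: add one child for a randomly chosen untried action.
        if node.untried:
            action = node.untried.pop(random.randrange(len(node.untried)))
            node.children.append(Node(step(node.state, action), node, action))
            node = node.children[-1]
        # 3. Simulation: random rollout from the new node to a terminal state.
        state = node.state
        while not is_terminal(state):
            state = step(state, random.choice(legal_actions(state)))
        reward = evaluate(state)
        # 4. Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most-visited root action.
    return max(root.children, key=lambda ch: ch.visits).action

print(mcts(0))  # e.g. 1 or 2; both can reach 10 exactly in this toy domain
```

Even in this tiny example, the tree built by `mcts` already contains the raw material that explanations must summarize: many simulated futures, their evaluations, and the visit statistics that relate them to each other.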

Explainable Agency in Artificial Intelligence Workshop, 35th AAAI Conference on Artificial Intelligence (AAAI 2021)
