In n-player sequential-move games, the second root-player move appears at tree depth n + 1. Depending on n and the available time, tree search techniques can struggle to expand the game tree deeply enough to find multiple-move plans for the root player, which is often more important for strategic play than considering every possible opponent move in between. The minimax-based Paranoid search and BRS+ algorithms currently achieve state-of-the-art performance, especially at short time settings, by using a generally incorrect opponent model. This simplifying model enables Alpha-Beta pruning, allowing the search to reach follow-up root-player moves at greater depths. This paper introduces abstraction over opponent moves to MCTS in multiplayer games, and exploits its synergy with progressive widening to outperform these state-of-the-art minimax-type baselines. Progressive widening makes the search tree selective and deep enough to reach the root player's next moves, while abstraction over opponent moves generalizes value estimates of the root player's moves online across different opponent moves. In contrast to paranoid search approaches, opponent models do not have to be simplified. Experiments show that combining progressive widening with opponent-move abstraction (MCTS-OMA-PW) improves performance in the multiplayer games Chinese Checkers, Rolit, and Focus. Our work thus paves the way for improved multiplayer search through online generalization that focuses on the root player's actions, with the potential to improve real-time MCTS applications as well as training in Expert Iteration and other meta-algorithms where short time settings are relevant.
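The two mechanisms named in the abstract can be illustrated with a minimal sketch. The widening rule below (gating expansion of a node by its visit count relative to its number of children) and the shared statistics table (aggregating a root-player move's value estimates across opponent-move contexts) are hypothetical simplifications for illustration; the constants `C` and `alpha` and all identifiers are assumptions, not taken from the paper.

```python
def may_expand(num_children: int, num_visits: int,
               C: float = 2.0, alpha: float = 0.5) -> bool:
    """Progressive-widening sketch: permit adding a new child to a search
    node only once its visit count reaches C * k^alpha, where k is the
    current number of children. This keeps the tree narrow and deep."""
    return num_visits >= C * num_children ** alpha


class OMATable:
    """Opponent-move-abstraction sketch: a table of value statistics for
    root-player moves, shared across all opponent-move contexts, so that
    estimates generalize online over different opponent replies."""

    def __init__(self):
        self.stats = {}  # move -> (visit count, total value)

    def update(self, move, value):
        n, v = self.stats.get(move, (0, 0.0))
        self.stats[move] = (n + 1, v + value)

    def mean(self, move):
        n, v = self.stats.get(move, (0, 0.0))
        return v / n if n else 0.0
```

In an actual MCTS implementation, `may_expand` would be consulted during the expansion step, and the selection policy would mix a node's local statistics with the shared `OMATable` estimates for the corresponding root-player move.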

Game AI, Monte Carlo Tree Search, Multi-agent systems, Multiplayer games, Monte Carlo methods
IEEE Conference on Games
Intelligent and autonomous systems

Baier, H.J.S., & Kaisers, M. (2020). Guiding multiplayer MCTS by focusing on yourself. In 2020 IEEE Conference on Games (CoG) (pp. 550–557). doi:10.1109/CoG47356.2020.9231603