MCTS Based on Simple Regret

Research output: Contribution to conference › Paper › peer-review

Abstract

UCT, a state-of-the-art algorithm for Monte Carlo tree search (MCTS) in games and Markov decision processes, is based on UCB, a sampling policy for the Multi-armed Bandit problem (MAB) that minimizes the cumulative regret. However, search differs from MAB in that in MCTS it is usually only the final “arm pull” (the actual move selection) that collects a reward, rather than all “arm pulls”. It therefore makes more sense to minimize the simple regret rather than the cumulative regret. We begin by introducing policies for multi-armed bandits with lower finite-time and asymptotic simple regret than UCB, and use them to develop a two-stage scheme (SR+CR) for MCTS that empirically outperforms UCT. Optimizing the sampling process is itself a metareasoning problem, whose solution can draw on value-of-information (VOI) techniques. Although a theory of VOI for search exists, applying it to MCTS is non-trivial, as the typical myopic assumptions fail. Lacking a complete working VOI theory for MCTS, we nevertheless propose a sampling scheme that is “aware” of VOI, yielding an algorithm that, in empirical evaluation, outperforms both UCT and the other proposed algorithms.
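
The abstract describes the algorithms only at a high level; the sketch below is a minimal, hypothetical Python illustration of the contrast it draws. The ½-greedy simple-regret policy and the root-only SR+CR split follow the paper's description, but all function names, the exploration constant, and the toy Bernoulli bandit are illustrative assumptions, not the paper's code.

```python
# Illustrative sketch (not from the paper): cumulative-regret (UCB1) vs.
# simple-regret arm selection, and the two-stage SR+CR split at the root.
import math
import random


def ucb1(counts, means, c=math.sqrt(2)):
    """UCB1: minimizes cumulative regret; the policy underlying UCT."""
    for i, n in enumerate(counts):
        if n == 0:           # sample every arm once first
            return i
    total = sum(counts)
    return max(range(len(counts)),
               key=lambda i: means[i] + c * math.sqrt(math.log(total) / counts[i]))


def half_greedy(counts, means):
    """1/2-greedy: a simple-regret-oriented policy. With probability 1/2
    sample the empirically best arm, otherwise sample uniformly."""
    for i, n in enumerate(counts):
        if n == 0:
            return i
    if random.random() < 0.5:
        return max(range(len(counts)), key=lambda i: means[i])
    return random.randrange(len(counts))


def sr_plus_cr(counts, means, at_root):
    """Two-stage SR+CR selection: only the final move choice at the root
    collects a reward, so minimize simple regret there; below the root,
    fall back to cumulative-regret (UCB1) sampling as in UCT."""
    return half_greedy(counts, means) if at_root else ucb1(counts, means)


if __name__ == "__main__":
    random.seed(0)
    p = [0.45, 0.50, 0.55]            # true (hidden) arm means; arm 2 is best
    budget = 200

    def simple_regret(policy):
        counts = [0] * len(p)
        means = [0.0] * len(p)
        for _ in range(budget):
            i = policy(counts, means)
            reward = 1.0 if random.random() < p[i] else 0.0
            counts[i] += 1
            means[i] += (reward - means[i]) / counts[i]
        chosen = max(range(len(p)), key=lambda i: means[i])
        return max(p) - p[chosen]     # regret of the single final recommendation

    print("UCB1:      ", simple_regret(ucb1))
    print("1/2-greedy:", simple_regret(half_greedy))
```

Averaged over many runs under a fixed budget, a simple-regret policy such as ½-greedy tends to recommend the best arm more reliably than UCB1, which is the advantage the abstract claims; a single run like the one above is only a usage illustration.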

Original language: English
Pages: 570-576
Number of pages: 7
State: Published - 1 Jan 2012
Event: 26th AAAI Conference on Artificial Intelligence, AAAI 2012 - Toronto, Canada
Duration: 22 Jul 2012 - 26 Jul 2012

Conference

Conference: 26th AAAI Conference on Artificial Intelligence, AAAI 2012
Country/Territory: Canada
City: Toronto
Period: 22/07/12 - 26/07/12

ASJC Scopus subject areas

  • Artificial Intelligence
