Doing Better Than UCT: Rational Monte Carlo Sampling in Trees

David Tolpin, Solomon Eyal Shimony

Research output: Working paper/Preprint

Abstract

UCT, a state-of-the-art algorithm for Monte Carlo tree sampling (MCTS), is based on UCB, a sampling policy for the Multi-armed Bandit Problem (MAB) that minimizes the cumulative regret. However, MCTS differs from MAB in that only the final choice, rather than all arm pulls, brings a reward; that is, the simple regret, as opposed to the cumulative regret, must be minimized.
This ongoing work aims to apply meta-reasoning techniques to MCTS, which is non-trivial. We begin by introducing policies for multi-armed bandits that achieve lower simple regret than UCB, and an algorithm for MCTS that combines cumulative and simple regret minimization and outperforms UCT. We also develop a sampling scheme loosely based on a myopic version of the perfect value of information. Finite-time and asymptotic analysis of the policies is provided, and the algorithms are compared empirically.
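To make the simple-versus-cumulative regret contrast concrete, here is a minimal Python sketch. It is an illustration under our own naming, not the policies proposed in the paper: UCB1 lets exploration decay logarithmically, which suits cumulative regret, while a constant-rate epsilon-greedy rule sacrifices accumulated reward to identify the best arm faster, which is what matters when only the final recommendation is rewarded. The function names, the eps=0.5 setting, and the toy Bernoulli arms are illustrative assumptions.

```python
import math
import random

def ucb1(counts, means, c=math.sqrt(2)):
    """UCB1 selection: argmax_i means[i] + c * sqrt(ln(total) / counts[i]).

    Tuned for cumulative regret: every pull is charged, so the
    exploration bonus shrinks logarithmically in the total pull count.
    """
    for i, n in enumerate(counts):
        if n == 0:  # pull each arm once before applying the index
            return i
    total = sum(counts)
    return max(range(len(counts)),
               key=lambda i: means[i] + c * math.sqrt(math.log(total) / counts[i]))

def eps_greedy(counts, means, eps=0.5):
    """A simple-regret-oriented policy (illustrative, not the paper's).

    Exploring uniformly a constant fraction of the time is wasteful in
    cumulative regret, but it keeps gathering evidence about every arm,
    driving down the probability that the final recommendation is wrong.
    """
    if random.random() < eps:
        return random.randrange(len(counts))
    return max(range(len(counts)), key=lambda i: means[i])

# Toy usage: three Bernoulli arms; arm 1 (p=0.5) is best.
p = [0.4, 0.5, 0.45]
counts, means = [0] * 3, [0.0] * 3
for _ in range(1000):
    i = eps_greedy(counts, means)  # or ucb1(counts, means)
    r = 1.0 if random.random() < p[i] else 0.0
    counts[i] += 1
    means[i] += (r - means[i]) / counts[i]  # incremental mean update
best_guess = max(range(3), key=lambda i: means[i])  # final recommendation
```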
Original language: English (GB)
State: Published - 2011

Publication series

Name: arXiv preprint arXiv:1108.3711
