Abstract
UCT, a state-of-the-art algorithm for Monte Carlo tree search (MCTS) in games and Markov decision processes, is based on UCB, a sampling policy for the Multi-armed Bandit problem (MAB) that minimizes the cumulative regret. However, search differs from MAB in that in MCTS it is usually only the final “arm pull” (the actual move selection) that collects a reward, rather than all “arm pulls”. Therefore, it makes more sense to minimize the simple regret, as opposed to the cumulative regret. We begin by introducing policies for multi-armed bandits with lower finite-time and asymptotic simple regret than UCB, using them to develop a two-stage scheme (SR+CR) for MCTS which outperforms UCT empirically. Optimizing the sampling process is itself a metareasoning problem, a solution of which can use value of information (VOI) techniques. Although the theory of VOI for search exists, applying it to MCTS is non-trivial, as typical myopic assumptions fail. Lacking a complete working VOI theory for MCTS, we nevertheless propose a sampling scheme that is “aware” of VOI, achieving an algorithm that in empirical evaluation outperforms both UCT and the other proposed algorithms.
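To make the two-stage idea concrete, the sketch below shows one way a SR+CR selection rule could be wired into an MCTS node-selection step: a simple-regret-oriented sampler at the root (where only the final move choice matters) and standard UCB1 at non-root nodes (where rollouts accumulate reward along the path). This is a minimal illustration under assumptions, not the paper's exact construction: the ε-greedy root policy, the constants, and the node attribute names (`children`, `visits`, `mean_reward`) are illustrative choices, and the paper considers other simple-regret policies.

```python
import math
import random

def ucb1_select(counts, values, c=math.sqrt(2)):
    """UCB1 (cumulative-regret minimization); used at non-root nodes."""
    # Sample every arm once before applying the UCB formula.
    for a, n in enumerate(counts):
        if n == 0:
            return a
    total = sum(counts)
    return max(range(len(counts)),
               key=lambda a: values[a] + c * math.sqrt(math.log(total) / counts[a]))

def epsilon_greedy_select(counts, values, eps=0.5):
    """An illustrative simple-regret-oriented sampler for the root node."""
    for a, n in enumerate(counts):
        if n == 0:
            return a
    if random.random() < eps:
        return random.randrange(len(counts))   # explore uniformly
    return max(range(len(counts)), key=lambda a: values[a])  # exploit best mean

def select_child(node, is_root):
    """Two-stage SR+CR selection: SR policy at the root, UCB1 below it.

    Assumes a hypothetical node type exposing `children`, and per-child
    `visits` and `mean_reward` statistics maintained by the MCTS loop.
    """
    counts = [child.visits for child in node.children]
    values = [child.mean_reward for child in node.children]
    idx = epsilon_greedy_select(counts, values) if is_root \
        else ucb1_select(counts, values)
    return node.children[idx]
```

The design point is only the split itself: the root arm is chosen to minimize simple regret of the eventual move recommendation, while deeper nodes keep the cumulative-regret rule that makes value estimates along the sampled path reliable.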
Original language | English |
---|---|
Pages | 570-576 |
Number of pages | 7 |
State | Published - 1 Jan 2012 |
Event | 26th AAAI Conference on Artificial Intelligence, AAAI 2012 - Toronto, Canada |
Duration | 22 Jul 2012 → 26 Jul 2012 |
Conference
Conference | 26th AAAI Conference on Artificial Intelligence, AAAI 2012 |
---|---|
Country/Territory | Canada |
City | Toronto |
Period | 22/07/12 → 26/07/12 |
ASJC Scopus subject areas
- Artificial Intelligence