VOI-aware MCTS

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

UCT, a state-of-the-art algorithm for Monte Carlo tree search (MCTS) in games and Markov decision processes, is based on UCB1, a sampling policy for the Multi-armed Bandit (MAB) problem that minimizes cumulative regret. However, search differs from MAB: in MCTS it is usually only the final "arm pull" (the actual move selection) that collects a reward, rather than all "arm pulls". In this paper, an MCTS sampling policy based on Value of Information (VOI) estimates of rollouts is suggested. Empirical evaluation of the policy, with comparison to UCB1 and UCT, is performed on random MAB instances as well as on Computer Go.
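The distinction the abstract draws — cumulative regret (every pull counts) versus the reward of the final move choice only (simple regret) — can be illustrated on a toy Bernoulli bandit. The sketch below is illustrative only and does not reproduce the paper's VOI estimates: `voi_choose` uses a crude Hoeffding-style surrogate for the chance that one more sample flips the final recommendation, and all function names are hypothetical.

```python
import math
import random

def ucb1_choose(counts, sums):
    # UCB1: maximize empirical mean plus exploration bonus sqrt(2 ln n / n_i).
    n = sum(counts)
    return max(range(len(counts)),
               key=lambda i: sums[i] / counts[i] + math.sqrt(2 * math.log(n) / counts[i]))

def voi_choose(counts, sums):
    # Hypothetical VOI-style policy: sample the arm whose extra pull has the
    # largest crude "value of information" for the *final* recommendation.
    means = [s / c for s, c in zip(sums, counts)]
    best = max(range(len(means)), key=lambda i: means[i])
    second = max((m for i, m in enumerate(means) if i != best), default=0.0)
    def voi(i):
        gap = (means[best] - second) if i == best else (means[best] - means[i])
        # Hoeffding-style bound on the probability that more samples of arm i
        # flip the ranking, scaled by the gap — a crude VOI surrogate, not the
        # paper's estimate.
        return gap * math.exp(-2.0 * gap * gap * counts[i])
    return max(range(len(counts)), key=voi)

def run(policy, probs, budget, rng):
    # Sample arms under `policy`, then recommend the best empirical arm; the
    # return value is the simple regret of that single final recommendation.
    k = len(probs)
    counts = [1] * k
    sums = [1.0 if rng.random() < p else 0.0 for p in probs]  # one initial pull each
    for _ in range(budget - k):
        i = policy(counts, sums)
        counts[i] += 1
        sums[i] += 1.0 if rng.random() < probs[i] else 0.0
    rec = max(range(k), key=lambda i: sums[i] / counts[i])  # the final "move selection"
    return max(probs) - probs[rec]

rng = random.Random(0)
probs = [0.5, 0.45, 0.4]
trials = 200
sr_ucb = sum(run(ucb1_choose, probs, 200, rng) for _ in range(trials)) / trials
sr_voi = sum(run(voi_choose, probs, 200, rng) for _ in range(trials)) / trials
print(f"avg simple regret  UCB1: {sr_ucb:.4f}  VOI-style: {sr_voi:.4f}")
```

Because only the recommendation after the budget is spent is scored, a policy tuned for cumulative regret (UCB1) is not automatically optimal here — which is the gap the paper's VOI-aware sampling targets.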

Original language: English
Title of host publication: ECAI 2012 - 20th European Conference on Artificial Intelligence, 27-31 August 2012, Montpellier, France - Including Prestigious Applications of Artificial Intelligence (PAIS-2012) System Demonstration
Publisher: IOS Press
Pages: 929-930
Number of pages: 2
ISBN (Print): 9781614990970
DOIs
State: Published - 1 Jan 2012
Event: 20th European Conference on Artificial Intelligence, ECAI 2012 - Montpellier, France
Duration: 27 Aug 2012 - 31 Aug 2012

Publication series

Name: Frontiers in Artificial Intelligence and Applications
Volume: 242
ISSN (Print): 0922-6389

Conference

Conference: 20th European Conference on Artificial Intelligence, ECAI 2012
Country/Territory: France
City: Montpellier
Period: 27/08/12 - 31/08/12
