TY - GEN
T1 - Selecting computations
T2 - 28th Conference on Uncertainty in Artificial Intelligence, UAI 2012
AU - Hay, Nicholas
AU - Russell, Stuart
AU - Tolpin, David
AU - Shimony, Solomon Eyal
PY - 2012/12/1
N2 - Sequential decision problems are often approximately solvable by simulating possible future action sequences. Metalevel decision procedures have been developed for selecting which action sequences to simulate, based on estimating the expected improvement in decision quality that would result from any particular simulation; an example is the recent work on using bandit algorithms to control Monte Carlo tree search in the game of Go. In this paper we develop a theoretical basis for metalevel decisions in the statistical framework of Bayesian selection problems, arguing (as others have done) that this is more appropriate than the bandit framework. We derive a number of basic results applicable to Monte Carlo selection problems, including the first finite sampling bounds for optimal policies in certain cases; we also provide a simple counterexample to the intuitive conjecture that an optimal policy will necessarily reach a decision in all cases. We then derive heuristic approximations in both Bayesian and distribution-free settings and demonstrate their superiority to bandit-based heuristics in one-shot decision problems and in Go.
UR - http://www.scopus.com/inward/record.url?scp=84886054445&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84886054445
SN - 9780974903989
T3 - Uncertainty in Artificial Intelligence - Proceedings of the 28th Conference, UAI 2012
SP - 346
EP - 355
BT - Uncertainty in Artificial Intelligence - Proceedings of the 28th Conference, UAI 2012
Y2 - 15 August 2012 through 17 August 2012
ER -