TY - JOUR
T1 - Domain independent heuristics for online stochastic contingent planning
AU - Blumenthal, Oded
AU - Shani, Guy
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024/1/1
Y1 - 2024/1/1
N2 - Partially observable Markov decision processes (POMDP) are a useful model for decision-making under partial observability and stochastic actions. Partially Observable Monte-Carlo Planning (POMCP) is an online algorithm for deciding on the next action to perform, using a Monte-Carlo tree search approach based on the UCT algorithm for fully observable Markov decision processes. POMCP develops an action-observation tree and, at the leaves, uses a rollout policy to provide a value estimate for the leaf. As such, POMCP is highly dependent on the rollout policy to compute good estimates, and hence to identify good actions. Thus, many practitioners who use POMCP are required to create strong, domain-specific heuristics. In this paper, we model POMDPs as stochastic contingent planning problems. This allows us to leverage domain-independent heuristics that were developed in the planning community. We suggest two heuristics: the first is based on the well-known h_add heuristic from classical planning, and the second is computed in belief space, taking the value of information into account.
AB - Partially observable Markov decision processes (POMDP) are a useful model for decision-making under partial observability and stochastic actions. Partially Observable Monte-Carlo Planning (POMCP) is an online algorithm for deciding on the next action to perform, using a Monte-Carlo tree search approach based on the UCT algorithm for fully observable Markov decision processes. POMCP develops an action-observation tree and, at the leaves, uses a rollout policy to provide a value estimate for the leaf. As such, POMCP is highly dependent on the rollout policy to compute good estimates, and hence to identify good actions. Thus, many practitioners who use POMCP are required to create strong, domain-specific heuristics. In this paper, we model POMDPs as stochastic contingent planning problems. This allows us to leverage domain-independent heuristics that were developed in the planning community. We suggest two heuristics: the first is based on the well-known h_add heuristic from classical planning, and the second is computed in belief space, taking the value of information into account.
KW - Contingent planning
KW - Heuristics
KW - Online planning
KW - POMDP
UR - http://www.scopus.com/inward/record.url?scp=85197771346&partnerID=8YFLogxK
U2 - 10.1007/s10472-024-09947-5
DO - 10.1007/s10472-024-09947-5
M3 - Article
AN - SCOPUS:85197771346
SN - 1012-2443
JO - Annals of Mathematics and Artificial Intelligence
JF - Annals of Mathematics and Artificial Intelligence
ER -