Partially observable Markov decision processes (POMDPs) are an appealing tool for modeling planning problems under uncertainty. They incorporate stochastic action and sensor descriptions and easily capture both goal-oriented and process-oriented tasks. Unfortunately, POMDPs are very difficult to solve: exact methods cannot handle problems with much more than 10 states, so approximate methods must be used. In this paper, we describe a simple variable-grid solution method that yields good results on relatively large problems with modest computational effort.
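The core idea behind grid-based POMDP approximation is to store values only at a finite set of belief points and interpolate elsewhere. The sketch below illustrates that general idea for a two-state POMDP, where the belief simplex is a line segment; it is a simplified illustration under assumed names (`make_grid`, `interpolate_value`), not the paper's specific variable-grid scheme.

```python
import numpy as np

def make_grid(resolution):
    """Regular belief grid for a hypothetical 2-state POMDP.

    A belief is [p, 1 - p]; the grid places points at p = 0, 1/r, ..., 1.
    """
    ps = np.linspace(0.0, 1.0, resolution + 1)
    return np.stack([ps, 1.0 - ps], axis=1)

def interpolate_value(belief, grid, values):
    """Value at an arbitrary belief, linearly interpolated from the two
    bracketing grid points (valid here because the 2-state belief
    simplex is one-dimensional)."""
    p = belief[0]
    ps = grid[:, 0]
    i = np.searchsorted(ps, p)
    if i == 0:
        return values[0]
    lo, hi = ps[i - 1], ps[i]
    w = (p - lo) / (hi - lo)
    return (1.0 - w) * values[i - 1] + w * values[i]
```

In a full grid-based solver, the `values` array would be updated by value-iteration backups at the grid points; a variable-grid method additionally refines the grid where the value function is hard to approximate.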
Number of pages: 7
State: Published - 1 Dec 1997
Event: 14th National Conference on Artificial Intelligence (AAAI-97), Providence, RI, USA
Duration: 27 Jul 1997 – 31 Jul 1997