Iterative Planning for Deterministic QDec-POMDPs.

Sagi Bazinin, Guy Shani

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

QDec-POMDPs are a qualitative alternative to stochastic Dec-POMDPs for goal-oriented planning in cooperative, partially observable multi-agent environments. Although QDec-POMDPs share the same worst-case complexity as Dec-POMDPs, previous research has shown that they scale to larger domains while producing high-quality plan trees. A key difficulty in distributed execution is the need to construct a joint plan tree that branches on the combinations of observations of all agents. In this work, we suggest an iterative algorithm, IMAP, that plans for one agent at a time, taking into consideration collaboration constraints on action execution imposed by previously planned agents, and generating new constraints for the agents that follow. We explain how these constraints are generated and handled, and describe a backtracking mechanism for revising constraints that cannot be met. We provide experimental results on multi-agent planning domains, showing that our method scales to much larger problems with several collaborating agents and huge state spaces.
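The abstract describes an iterative, per-agent planning loop in which each agent receives collaboration constraints from previously planned agents, passes new constraints to the next agent, and backtracking is triggered when constraints cannot be met. The Python sketch below illustrates one plausible shape of such a loop under those assumptions; the names (plan_single_agent, Constraint, the forbidden-set mechanism) are illustrative and are not the authors' IMAP implementation or interface.

```python
from typing import Callable, FrozenSet, List, Optional, Set, Tuple

# Illustrative types (assumptions, not the paper's formalism):
Constraint = Tuple[str, str]   # e.g. (required collaborative action, plan-tree branch)
PlanTree = object              # placeholder for a single-agent conditional plan tree

# A single-agent planner that must satisfy `incoming` constraints and must not
# emit an outgoing constraint set that appears in `forbidden`. It returns a
# plan tree plus the constraints it imposes on later agents, or None on failure.
SingleAgentPlanner = Callable[
    [int, List[Constraint], Set[FrozenSet[Constraint]]],
    Optional[Tuple[PlanTree, List[Constraint]]],
]


def imap_sketch(agents: List[int],
                plan_single_agent: SingleAgentPlanner) -> Optional[List[PlanTree]]:
    """Sketch of an iterative per-agent planning loop with backtracking
    over collaboration constraints (structure only, not the published algorithm)."""
    n = len(agents)
    plans: List[Optional[PlanTree]] = [None] * n
    incoming: List[List[Constraint]] = [[] for _ in range(n + 1)]  # constraints handed to each agent
    forbidden: List[Set[FrozenSet[Constraint]]] = [set() for _ in range(n)]

    i = 0
    while 0 <= i < n:
        result = plan_single_agent(agents[i], incoming[i], forbidden[i])
        if result is None:
            if i == 0:
                return None  # even the first agent cannot plan: no joint plan found
            # The constraints handed to agent i were unsatisfiable; forbid them so
            # that re-planning agent i-1 yields a different set of constraints.
            forbidden[i - 1].add(frozenset(incoming[i]))
            i -= 1
        else:
            plan_tree, outgoing = result
            plans[i] = plan_tree
            incoming[i + 1] = outgoing  # constraints for the next agent to satisfy
            i += 1

    return plans  # one plan tree per agent, mutually consistent by construction
```

In this sketch, backtracking is driven purely by recording failed constraint sets; a concrete implementation would also need the underlying contingent planner to respect the forbidden sets when generating new plans.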
Original language: English
Title of host publication: GCAI-2018 4th Global Conference on Artificial Intelligence
Pages: 15-28
Number of pages: 14
State: Published - 2018
