Distributed Gibbs: A linear-space sampling-based DCOP algorithm

Duc Thien Nguyen, William Yeoh, Hoong Chuin Lau, Roie Zivan

Research output: Contribution to journal › Article › peer-review

42 Scopus citations


Researchers have used distributed constraint optimization problems (DCOPs) to model various multi-agent coordination and resource allocation problems. Very recently, Ottens et al. proposed a promising new approach to solve DCOPs that is based on confidence bounds via their Distributed UCT (DUCT) sampling-based algorithm. Unfortunately, its memory requirement per agent is exponential in the number of agents in the problem, which prohibits it from scaling up to large problems. Thus, in this article, we introduce two new sampling-based DCOP algorithms called Sequential Distributed Gibbs (SD-Gibbs) and Parallel Distributed Gibbs (PD-Gibbs). Both algorithms have memory requirements per agent that are linear in the number of agents in the problem. Our empirical results show that our algorithms can find better solutions than DUCT, run faster than DUCT, and solve some large problems that DUCT failed to solve due to memory limitations.
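To illustrate the sampling idea behind Gibbs-style DCOP solvers, the following is a minimal centralized sketch, not the SD-Gibbs or PD-Gibbs algorithms from the paper: each variable is resampled in turn from a Boltzmann distribution over its values given the current assignment of the others, and the best complete assignment seen so far is recorded. The three binary variables and the utility tables are invented for this example.

```python
import math
import random

# Toy pairwise utility tables for three binary variables x0, x1, x2.
# These values are illustrative only, not taken from the paper.
UTIL = {
    (0, 1): {(0, 0): 5, (0, 1): 2, (1, 0): 1, (1, 1): 8},
    (1, 2): {(0, 0): 3, (0, 1): 9, (1, 0): 2, (1, 1): 3},
}

def local_utility(i, v, assign):
    """Sum of utilities of the constraints involving variable i, with x_i = v."""
    total = 0
    for (a, b), table in UTIL.items():
        if a == i:
            total += table[(v, assign[b])]
        elif b == i:
            total += table[(assign[a], v)]
    return total

def gibbs_sample(iters=2000, seed=0):
    rng = random.Random(seed)
    assign = [rng.randint(0, 1) for _ in range(3)]
    best, best_util = list(assign), None
    for _ in range(iters):
        for i in range(3):
            # Resample x_i proportionally to exp(local utility), i.e. from
            # a Boltzmann distribution conditioned on the other variables.
            weights = [math.exp(local_utility(i, v, assign)) for v in (0, 1)]
            assign[i] = rng.choices((0, 1), weights=weights)[0]
        total = sum(t[(assign[a], assign[b])] for (a, b), t in UTIL.items())
        if best_util is None or total > best_util:
            best, best_util = list(assign), total
    return best, best_util

best, util = gibbs_sample()
print(best, util)  # best assignment found and its total utility
```

Note that this sketch keeps only the current and best assignments, which is the source of the linear-space property the abstract highlights; the paper's contribution is distributing this process across agents, which the sketch does not attempt.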

Original language: English
Pages (from-to): 705-748
Number of pages: 44
Journal: Journal of Artificial Intelligence Research
State: Published - 1 Mar 2019

ASJC Scopus subject areas

  • Artificial Intelligence


