Abstract
Distributed optimization is a fundamental framework for collaborative inference over networks. The operation is modeled as the joint minimization of a shared objective that typically depends on local observations. Distributed optimization algorithms, such as the distributed alternating direction method of multipliers (D-ADMM), iteratively combine local computations and message exchanges. A main challenge associated with distributed optimization, and particularly with D-ADMM, is that it requires a large number of communications to reach consensus. In this work we propose unfolded D-ADMM, which follows the emerging deep unfolding methodology to enable D-ADMM to operate reliably with a predefined and small number of messages exchanged by each agent. Unfolded D-ADMM fully preserves the operation of D-ADMM, while leveraging data to tune the hyperparameters of each iteration. These hyperparameters can either be agent-specific, aiming to achieve the best performance within a fixed number of iterations over a given network, or shared among the agents, making it possible to learn to optimize in a distributed manner over different networks. We specialize unfolded D-ADMM for two representative settings: a distributed sparse recovery setup and a distributed machine learning scenario. Our numerical results demonstrate that the proposed approach dramatically reduces the number of communications used by D-ADMM, without compromising its performance.
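To make the deep-unfolding idea concrete, the following is a minimal sketch of unrolling a fixed number of ADMM iterations for the distributed sparse recovery (LASSO) setting mentioned in the abstract, with the per-iteration penalty `rho_k` and shrinkage threshold `tau_k` exposed as trainable parameters. This is an illustration only: the class and variable names (`UnfoldedDADMM`, `log_rho`, `log_tau`) are hypothetical, and the consensus step here uses a global average as a stand-in for the neighbor-wise message exchanges of the actual decentralized D-ADMM.

```python
import torch
import torch.nn as nn

def soft_threshold(v, tau):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return torch.sign(v) * torch.relu(v.abs() - tau)

class UnfoldedDADMM(nn.Module):
    """Unfolds K ADMM iterations for a distributed LASSO problem,
    min_x sum_i ||A_i x - b_i||^2 + lambda ||x||_1, with learnable
    per-iteration hyperparameters. The parameterization below is an
    assumption for illustration, not the paper's exact design."""

    def __init__(self, num_iters: int = 10):
        super().__init__()
        # Positivity of rho_k and tau_k is enforced via exp;
        # initial values are arbitrary.
        self.log_rho = nn.Parameter(torch.zeros(num_iters))
        self.log_tau = nn.Parameter(torch.full((num_iters,), -2.0))

    def forward(self, A: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # A: (num_agents, m, d) local measurement matrices; b: (num_agents, m).
        N, _, d = A.shape
        z = torch.zeros(d)     # consensus variable
        u = torch.zeros(N, d)  # scaled dual variables, one per agent
        for k in range(self.log_rho.numel()):
            rho, tau = self.log_rho[k].exp(), self.log_tau[k].exp()
            # Local primal update: each agent solves a ridge-regularized
            # least-squares problem using only its own data.
            x = torch.stack([
                torch.linalg.solve(
                    A[i].T @ A[i] + rho * torch.eye(d),
                    A[i].T @ b[i] + rho * (z - u[i]))
                for i in range(N)])
            # Consensus step: a global average is used here in place of
            # D-ADMM's neighbor-wise message exchanges (a simplification).
            z = soft_threshold((x + u).mean(dim=0), tau)
            # Dual ascent step.
            u = u + x - z
        return z
```

Training such a model would minimize, e.g., `torch.nn.functional.mse_loss(model(A, b), x_true)` over a dataset of problem instances, so that the K unrolled iterations, i.e., K communication rounds, are tuned end-to-end.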
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 3012-3024 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Mobile Computing |
| Volume | 24 |
| Issue number | 4 |
| DOIs | |
| State | Published - 1 Jan 2025 |
Keywords
- ADMM
- deep unfolding
- distributed optimization
ASJC Scopus subject areas
- Software
- Computer Networks and Communications
- Electrical and Electronic Engineering