TY - GEN
T1 - Distributed ADMM with Limited Communications via Deep Unfolding
AU - Noah, Yoav
AU - Shlezinger, Nir
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/1/1
Y1 - 2023/1/1
AB - Distributed optimization arises in various applications. A widely used distributed optimizer is the distributed alternating direction method of multipliers (D-ADMM) algorithm, which enables agents to jointly minimize a shared objective by iteratively combining local computations and message exchanges. However, D-ADMM often requires a large number of possibly costly communications to reach convergence, limiting its applicability in communication-constrained networks. In this work we propose unfolded D-ADMM, which facilitates the application of D-ADMM with limited communications using the emerging deep unfolding methodology. We run the conventional D-ADMM algorithm for a fixed number of communication rounds, while leveraging data to tune the hyperparameters of each iteration of the algorithm. By doing so, we learn to optimize with limited communications, while preserving the interpretability and flexibility of the original D-ADMM algorithm. Our numerical results demonstrate that the proposed approach dramatically reduces the number of communications utilized by D-ADMM, without compromising its performance.
KW - Distributed optimization
KW - deep unfolding
UR - http://www.scopus.com/inward/record.url?scp=86000374125&partnerID=8YFLogxK
U2 - 10.1109/ICASSP49357.2023.10096455
DO - 10.1109/ICASSP49357.2023.10096455
M3 - Conference contribution
AN - SCOPUS:86000374125
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
BT - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
PB - Institute of Electrical and Electronics Engineers
T2 - 48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023
Y2 - 4 June 2023 through 10 June 2023
ER -
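
For illustration, the following is a minimal PyTorch sketch of the idea the abstract describes: unfold a fixed number of decentralized consensus ADMM rounds and learn a per-round penalty parameter rho_k from data. Everything below is an assumption made for concreteness (quadratic local costs 0.5*||A_i x - b_i||^2, the class name UnfoldedDADMM, the graph handling, the training loop); it is not the authors' implementation.

import torch
import torch.nn as nn

class UnfoldedDADMM(nn.Module):
    # Unfolds num_iters rounds of decentralized consensus ADMM; each round
    # carries its own learnable penalty parameter (the deep-unfolding pattern).
    def __init__(self, num_iters, adjacency):
        super().__init__()
        self.num_iters = num_iters
        self.register_buffer("adj", adjacency.float())             # (N, N) 0/1 matrix, no self-loops
        self.register_buffer("deg", adjacency.float().sum(dim=1))  # node degrees |N_i|
        # One raw penalty per unfolded iteration; softplus keeps rho_k positive.
        self.rho_raw = nn.Parameter(torch.zeros(num_iters))

    def forward(self, A, b):
        # A: (N, m, d) local sensing matrices, b: (N, m) local measurements,
        # for local quadratic costs f_i(x) = 0.5 * ||A_i x - b_i||^2.
        N, _, d = A.shape
        x = A.new_zeros(N, d)      # per-agent primal variables, x^0 = 0
        lam = A.new_zeros(N, d)    # per-agent dual variables
        neigh = A.new_zeros(N, d)  # sums of neighbors' primal values (zero since x^0 = 0)
        AtA = A.transpose(1, 2) @ A                              # (N, d, d)
        Atb = (A.transpose(1, 2) @ b.unsqueeze(-1)).squeeze(-1)  # (N, d)
        eye = torch.eye(d, device=A.device)
        for k in range(self.num_iters):
            rho = torch.nn.functional.softplus(self.rho_raw[k])
            # Closed-form x-update of consensus ADMM for quadratic local costs:
            # (A_i^T A_i + 2 rho |N_i| I) x_i = A_i^T b_i - lam_i + rho (|N_i| x_i + sum_j x_j).
            rhs = Atb - lam + rho * (self.deg.unsqueeze(1) * x + neigh)
            lhs = AtA + 2.0 * rho * self.deg.view(N, 1, 1) * eye
            x = torch.linalg.solve(lhs, rhs.unsqueeze(-1)).squeeze(-1)
            # The single communication round of iteration k: exchange the new x_i.
            neigh = self.adj @ x
            # Dual ascent on the edge consensus constraints x_i = x_j.
            lam = lam + rho * (self.deg.unsqueeze(1) * x - neigh)
        return x

A hypothetical training loop would then tune the per-round penalties on example problem instances (A, b, x_star), so that the fixed communication budget suffices:

model = UnfoldedDADMM(num_iters=10, adjacency=adj)   # adj: assumed (N, N) 0/1 tensor
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for A, b, x_star in dataset:                          # hypothetical dataset of instances
    loss = ((model(A, b) - x_star) ** 2).mean()       # push every agent toward the shared minimizer
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Note that only num_iters scalars are learned here, which reflects the abstract's point that unfolding preserves the interpretability and flexibility of the original D-ADMM iterations.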