TY - GEN
T1 - Subgradient Descent Learning with Over-the-Air Computation
AU - Gez, Tamir L.S.
AU - Cohen, Kobi
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - We consider a distributed learning problem in a communication network consisting of N distributed nodes and a central parameter server (PS). The computation is performed by the PS based on data received from the nodes, which transmit over a multiple access channel (MAC). The objective function is a sum of the nodes' local loss functions. This problem has attracted growing interest in distributed sensing systems and, more recently, in federated learning (FL). However, existing methods rely on the assumption that the loss functions are continuously differentiable. In this paper, we first tackle the problem when this assumption does not necessarily hold. We develop a novel algorithm, dubbed Sub-Gradient descent Multiple Access (SGMA), to solve the learning problem over the MAC. In SGMA, each node transmits an analog shaped waveform of its local subgradient over the MAC, and the PS receives a superposition of the noisy analog signals, resulting in a bandwidth-efficient over-the-air (OTA) computation used to update the learned model. We analyze the performance of SGMA and prove that it approaches the convergence rate of the centralized subgradient algorithm in large networks. Simulation results using real datasets demonstrate the efficiency of SGMA.
KW - Distributed learning
KW - federated learning (FL)
KW - gradient descent (GD)-type learning
KW - multiple access channel (MAC)
KW - over-the-air (OTA) computation
KW - subgradient methods
UR - http://www.scopus.com/inward/record.url?scp=85172384711&partnerID=8YFLogxK
U2 - 10.1109/ICASSP49357.2023.10095134
DO - 10.1109/ICASSP49357.2023.10095134
M3 - Conference contribution
AN - SCOPUS:85172384711
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
BT - ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, Proceedings
PB - Institute of Electrical and Electronics Engineers
T2 - 48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023
Y2 - 4 June 2023 through 10 June 2023
ER -