Communication-Efficient Federated Learning via Sparse Training with Regularized Error Correction

Ran Greidi, Kobi Cohen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Federated Learning (FL) is an emerging paradigm for decentralized machine learning (ML), in which multiple models are collaboratively trained in a privacy-preserving manner. However, since communication and computation resources are limited, training deep neural network (DNN) models in FL systems faces challenges such as elevated computational and communication costs in complex tasks. Sparse training schemes have gained increasing attention as a way to scale down the dimensionality of each client's (i.e., node's) transmission. In particular, sparsification with error correction is a promising technique, in which only the important updates are sent to the parameter server (PS) and the rest are accumulated locally. While error correction methods have been shown to achieve a significant sparsification level of the client-to-PS message without harming convergence, pushing sparsity further has remained unresolved due to the staleness effect. In this paper, we propose a novel algorithm, dubbed Federated Learning with Accumulated Regularized Embeddings (FLARE), to overcome this challenge. FLARE presents a novel sparse training approach via accumulated pulling of the updated models with regularization on the embeddings in the FL process, providing a powerful solution to the staleness effect and pushing sparsity to an exceptional level. Our theoretical analysis demonstrates that FLARE's regularized error feedback achieves significant improvements in scalability with the sparsity parameter. The performance of FLARE is validated through experiments on diverse and complex models, achieving a remarkable sparsity level (10 times and more beyond the current state of the art) along with significantly improved accuracy.
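The client-side mechanism the abstract describes, sending only the most significant entries of an update to the PS while accumulating the remainder locally, can be sketched as follows. This is a minimal illustration of generic top-k sparsification with error feedback, not FLARE itself (whose regularized accumulation on the embeddings is detailed in the paper); the function name, parameters, and dimensions below are illustrative assumptions.

```python
import numpy as np

def sparsify_with_error_feedback(update, residual, k):
    """Generic top-k sparsification with local error accumulation.

    Illustrative sketch only: the k largest-magnitude entries of the
    error-corrected update form the message to the parameter server;
    the remainder is kept locally and folded into the next round.
    This is NOT the FLARE algorithm, which additionally regularizes
    the accumulated embeddings to mitigate staleness.
    """
    corrected = update + residual              # apply accumulated error
    idx = np.argsort(np.abs(corrected))[-k:]   # indices of the top-k magnitudes
    sparse_msg = np.zeros_like(corrected)
    sparse_msg[idx] = corrected[idx]           # sparse message sent to the PS
    new_residual = corrected - sparse_msg      # error retained for the next round
    return sparse_msg, new_residual

# Example: one client round with a 10-dimensional update and k = 2.
rng = np.random.default_rng(0)
residual = np.zeros(10)
grad = rng.normal(size=10)
msg, residual = sparsify_with_error_feedback(grad, residual, k=2)
print("sent to PS:", msg)
print("kept locally:", residual)
```

Under aggressive sparsity (small k), the residual can carry entries across many rounds before they are transmitted; this growing delay is the staleness effect that FLARE's regularized error feedback is designed to counteract.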

Original language: English
Title of host publication: 2024 60th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2024
Publisher: Institute of Electrical and Electronics Engineers
ISBN (Electronic): 9798331541033
DOIs
State: Published - 1 Jan 2024
Event: 60th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2024 - Urbana, United States
Duration: 24 Sep 2024 – 27 Sep 2024

Publication series

Name: 2024 60th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2024

Conference

Conference: 60th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2024
Country/Territory: United States
City: Urbana
Period: 24/09/24 – 27/09/24

Keywords

  • communication-efficiency
  • Deep learning
  • deep neural network (DNN)
  • federated learning (FL)
  • sparse training

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Computer Networks and Communications
  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
  • Control and Optimization
