Abstract
Federated learning (FL) is a leading approach for iterative learning from possibly private data held at edge devices. Federated operation gives rise to two challenges: privacy leakage, which accumulates over the course of learning, and communication latency. These limitations are often mitigated individually, by introducing privacy-preserving noise and user-selection policies, typically at the cost of accuracy. In this work, we propose Privacy-aware Active User SElection (PAUSE), which balances the trade-off between privacy accumulation, communication latency, and optimization of the learned model via dedicated user selection. This triplet is used to construct a reward (cost function), according to which a multi-armed bandit (MAB)-based algorithm dynamically chooses a subset of users in each round, while guaranteeing bounded accumulated privacy leakage. We provide a theoretical analysis showing that the reward growth rate of PAUSE matches the best-known rate in the MAB literature. While the privacy guarantees hold by construction, we numerically validate the latency and accuracy gains of PAUSE in different FL experimental settings.
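The sketch below illustrates the kind of mechanism the abstract describes: a UCB-style multi-armed bandit that selects a subset of users per federated round while tracking a per-user privacy budget so that accumulated leakage stays bounded. It is not the paper's PAUSE algorithm; the UCB index, the reward weights on the (accuracy, latency, privacy) triplet, the budget values, and all names are hypothetical placeholders.

```python
"""Illustrative sketch only (not the authors' PAUSE implementation):
UCB-style bandit user selection with a per-user privacy budget."""
import math
import random

NUM_USERS = 20               # hypothetical population of edge devices
USERS_PER_ROUND = 4          # size of the selected subset each round
PRIVACY_BUDGET = 5.0         # hypothetical per-user epsilon budget
EPS_PER_PARTICIPATION = 0.5  # hypothetical privacy cost of one round

counts = [0] * NUM_USERS          # times each user was selected
mean_reward = [0.0] * NUM_USERS   # empirical mean reward per user
spent_eps = [0.0] * NUM_USERS     # accumulated privacy leakage per user


def observed_reward(user: int) -> float:
    """Placeholder reward combining accuracy gain, latency, and privacy cost.
    In practice this would be measured from the FL round; here it is random."""
    accuracy_gain = random.random()
    latency_penalty = random.random()
    privacy_penalty = spent_eps[user] / PRIVACY_BUDGET
    # Hypothetical weights on the triplet described in the abstract.
    return accuracy_gain - 0.3 * latency_penalty - 0.3 * privacy_penalty


def select_users(round_idx: int) -> list[int]:
    """Pick the users with the highest UCB index among those with budget left."""
    eligible = [u for u in range(NUM_USERS)
                if spent_eps[u] + EPS_PER_PARTICIPATION <= PRIVACY_BUDGET]

    def ucb(u: int) -> float:
        if counts[u] == 0:
            return float("inf")   # force initial exploration of every user
        bonus = math.sqrt(2.0 * math.log(round_idx + 1) / counts[u])
        return mean_reward[u] + bonus

    return sorted(eligible, key=ucb, reverse=True)[:USERS_PER_ROUND]


for t in range(100):              # federated rounds
    chosen = select_users(t)
    if not chosen:                # every user exhausted its privacy budget
        break
    for u in chosen:
        r = observed_reward(u)
        counts[u] += 1
        mean_reward[u] += (r - mean_reward[u]) / counts[u]
        spent_eps[u] += EPS_PER_PARTICIPATION
```

Because users are filtered out once their (assumed) epsilon budget would be exceeded, the accumulated leakage of every user is bounded by construction, mirroring the guarantee stated in the abstract.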
| Original language | English |
|---|---|
| Journal | Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing |
| DOIs | |
| State | Published - 1 Jan 2025 |
| Event | 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2025 - Hyderabad, India. Duration: 6 Apr 2025 → 11 Apr 2025 |
Keywords
- Federated learning
- multi-armed bandit
- privacy
ASJC Scopus subject areas
- Software
- Signal Processing
- Electrical and Electronic Engineering