TY - CONF
T1 - Joint Privacy Enhancement and Quantization in Federated Learning
AU - Lang, Natalie
AU - Shlezinger, Nir
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022/1/1
Y1 - 2022/1/1
N2 - Federated learning (FL) is an emerging paradigm for training machine learning models using possibly private data available at edge devices. Among the key challenges associated with FL are first the need to preserve the privacy of the local data sets, and second the communication load due to the repeated exchange of updated models; both are often tackled individually with methods whose operation distorts the updated models, e.g., local differential privacy (LDP) mechanisms and lossy compression, respectively. In this work we propose a method for joint privacy enhancement and quantization (JoPEQ), unifying lossy compression and privacy enhancement for FL. JoPEQ utilizes universal vector quantization, where distortion is statistically equivalent to additive noise, and augments the compression distortion with dedicated privacy preserving noise to simultaneously achieve compression and a desired privacy level. We numerically demonstrate that JoPEQ reduces the overall distortion compared to individual LDP and compression, which is translated into improved trained models.
AB - Federated learning (FL) is an emerging paradigm for training machine learning models using possibly private data available at edge devices. Among the key challenges associated with FL are first the need to preserve the privacy of the local data sets, and second the communication load due to the repeated exchange of updated models; both are often tackled individually with methods whose operation distorts the updated models, e.g., local differential privacy (LDP) mechanisms and lossy compression, respectively. In this work we propose a method for joint privacy enhancement and quantization (JoPEQ), unifying lossy compression and privacy enhancement for FL. JoPEQ utilizes universal vector quantization, where distortion is statistically equivalent to additive noise, and augments the compression distortion with dedicated privacy preserving noise to simultaneously achieve compression and a desired privacy level. We numerically demonstrate that JoPEQ reduces the overall distortion compared to individual LDP and compression, which is translated into improved trained models.
UR - https://www.scopus.com/pages/publications/85132256953
U2 - 10.1109/ISIT50566.2022.9834551
DO - 10.1109/ISIT50566.2022.9834551
M3 - Conference contribution
AN - SCOPUS:85132256953
T3 - IEEE International Symposium on Information Theory - Proceedings
SP - 2040
EP - 2045
BT - 2022 IEEE International Symposium on Information Theory, ISIT 2022
PB - Institute of Electrical and Electronics Engineers
T2 - 2022 IEEE International Symposium on Information Theory, ISIT 2022
Y2 - 26 June 2022 through 1 July 2022
ER -