TY - GEN
T1 - Learning Multi-Rate Vector Quantization for Remote Deep Inference
AU - Malka, May
AU - Ginzach, Shai
AU - Shlezinger, Nir
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - Remote inference accommodates a broad range of scenarios in which inference is carried out using data acquired at a remote user. When the sensing and inferring users communicate over rate-limited channels, compressing the data reduces latency, and deep learning makes it possible to jointly learn the compression encoding along with the inference rule. However, because the data is compressed into a fixed number of bits, the resolution cannot be adapted to changes in channel conditions. In this work we propose a multi-rate remote deep inference scheme that trains a single encoder-decoder model using learned vector quantizers while supporting different quantization levels. Our scheme is based on designing nested codebooks together with a progressive learning algorithm. Numerical results demonstrate that the proposed scheme yields remote deep inference that operates at multiple rates while approaching the performance of fixed-rate models.
AB - Remote inference accommodates a broad range of scenarios in which inference is carried out using data acquired at a remote user. When the sensing and inferring users communicate over rate-limited channels, compressing the data reduces latency, and deep learning makes it possible to jointly learn the compression encoding along with the inference rule. However, because the data is compressed into a fixed number of bits, the resolution cannot be adapted to changes in channel conditions. In this work we propose a multi-rate remote deep inference scheme that trains a single encoder-decoder model using learned vector quantizers while supporting different quantization levels. Our scheme is based on designing nested codebooks together with a progressive learning algorithm. Numerical results demonstrate that the proposed scheme yields remote deep inference that operates at multiple rates while approaching the performance of fixed-rate models.
KW - Remote inference
KW - Adaptive compression
UR - http://www.scopus.com/inward/record.url?scp=85168248015&partnerID=8YFLogxK
U2 - 10.1109/ICASSPW59220.2023.10193526
DO - 10.1109/ICASSPW59220.2023.10193526
M3 - Conference contribution
AN - SCOPUS:85168248015
T3 - ICASSPW 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing Workshops, Proceedings
BT - ICASSPW 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing Workshops, Proceedings
PB - Institute of Electrical and Electronics Engineers
T2 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing Workshops, ICASSPW 2023
Y2 - 4 June 2023 through 10 June 2023
ER -