TY - GEN
T1 - Single Letter Formulas for Quantized Compressed Sensing with Gaussian Codebooks
AU - Kipnis, Alon
AU - Reeves, Galen
AU - Eldar, Yonina C.
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/8/15
Y1 - 2018/8/15
N2 - Theoretical and experimental results have shown that compressed sensing with quantization can perform well if the signal is very sparse, the noise is very low, and the bitrate is sufficiently large. However, a precise characterization of the fundamental tradeoffs between these quantities has remained elusive. In our previous work, we considered a quantization scheme that first computes the conditional expectation of the signal. In this paper, we focus on a different approach in which the measurements are encoded directly using Gaussian codebooks. We show that the mean-squared error (MSE) distortion of this approach can be analyzed by studying a degraded measurement model without any bitrate constraints. Building upon ideas from statistical physics and random matrix theory, we then provide single-letter formulas for the reconstruction error associated with optimal decoding. These formulas provide an explicit characterization of the MSE as a function of: (1) the average quantization bitrate, (2) the prior distribution of the signal, and (3) the spectral distribution of the sensing matrix. They also provide upper bounds on the fundamental limits of compressed sensing with quantization. Interestingly, it is shown that in some problem regimes, this method achieves the best known performance, even though the encoding stage does not use any information about the signal distribution other than its mean and variance.
AB - Theoretical and experimental results have shown that compressed sensing with quantization can perform well if the signal is very sparse, the noise is very low, and the bitrate is sufficiently large. However, a precise characterization of the fundamental tradeoffs between these quantities has remained elusive. In our previous work, we considered a quantization scheme that first computes the conditional expectation of the signal. In this paper, we focus on a different approach in which the measurements are encoded directly using Gaussian codebooks. We show that the mean-squared error (MSE) distortion of this approach can be analyzed by studying a degraded measurement model without any bitrate constraints. Building upon ideas from statistical physics and random matrix theory, we then provide single-letter formulas for the reconstruction error associated with optimal decoding. These formulas provide an explicit characterization of the MSE as a function of: (1) the average quantization bitrate, (2) the prior distribution of the signal, and (3) the spectral distribution of the sensing matrix. They also provide upper bounds on the fundamental limits of compressed sensing with quantization. Interestingly, it is shown that in some problem regimes, this method achieves the best known performance, even though the encoding stage does not use any information about the signal distribution other than its mean and variance.
UR - https://www.scopus.com/pages/publications/85052236080
U2 - 10.1109/ISIT.2018.8437761
DO - 10.1109/ISIT.2018.8437761
M3 - Conference contribution
AN - SCOPUS:85052236080
SN - 9781538647806
T3 - IEEE International Symposium on Information Theory - Proceedings
SP - 71
EP - 75
BT - 2018 IEEE International Symposium on Information Theory, ISIT 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE International Symposium on Information Theory, ISIT 2018
Y2 - 17 June 2018 through 22 June 2018
ER -