TY - GEN
T1 - Symbol-Level Online Channel Tracking for Deep Receivers
AU - Finish, Ron Aharon
AU - Cohen, Yoav
AU - Raviv, Tomer
AU - Shlezinger, Nir
PY - 2022/4/27
Y1 - 2022/4/27
N2 - Deep neural networks (DNNs) allow digital receivers to operate in complex environments by learning from data corresponding to the channel input-output relationship. Since communication channels change over time, DNN-aided receivers may need to be retrained periodically, which conventionally involves excessive pilot signaling at the cost of reduced spectral efficiency. In this paper, we study how one can obtain data for retraining deep receivers without sending pilots or relying on specific protocol redundancies, by combining self-supervision with active learning concepts. We focus on the recently proposed ViterbiNet receiver, which integrates a DNN for learning the channel into the Viterbi algorithm. To enable self-supervision, we use the soft-output Viterbi algorithm to evaluate the decision confidence for each detected symbol in a given word. Then, to avoid learning from erroneous data, we use active learning to choose a subset of the recovered symbols for retraining. The proposed method selects decision-directed data whose confidence is not so low as to result in inaccurate labeling, yet not so high as to compromise the diversity of the data. We demonstrate that self-supervised symbol-level training yields performance within a small gap of the Viterbi algorithm with instantaneous channel knowledge.
KW - Active learning
KW - self-supervision
KW - Viterbi algorithm
KW - Training
KW - Signal processing algorithms
KW - spectral efficiency
UR - http://www.scopus.com/inward/record.url?scp=85131248722&partnerID=8YFLogxK
U2 - 10.1109/ICASSP43922.2022.9747026
DO - 10.1109/ICASSP43922.2022.9747026
M3 - Conference contribution
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 8897
EP - 8901
BT - IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
PB - Institute of Electrical and Electronics Engineers
T2 - 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022
Y2 - 23 May 2022 through 27 May 2022
ER -