Active Deep Decoding of Linear Codes

Ishay Be'ery, Nir Raviv, Tomer Raviv, Yair Be'ery

Research output: Contribution to journal › Article › peer-review

29 Scopus citations

Abstract

High-quality data is essential for training a robust deep learning model. While in other fields data is sparse and costly to collect, in error decoding it is free to query and label, which allows the data to be exploited. Utilizing this fact and inspired by active learning, two novel methods are introduced to improve Weighted Belief Propagation (WBP) decoding. These methods combine machine-learning concepts with error-decoding measures. For BCH(63,36), (63,45), and (127,64) codes with cycle-reduced parity-check matrices, smartly sampling the training data yields FER improvements over the original WBP of up to 0.4 dB in the waterfall region and up to 1.5 dB in the error-floor region, without increasing inference (decoding) complexity. The proposed methods constitute example guidelines for enhancing a deep learning model by incorporating domain knowledge from the error-correcting field. These guidelines can be adapted to any other deep learning based communication block.
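The abstract's central idea is active selection of training samples for a WBP decoder: since channel outputs are free to generate and label, one can oversample and keep only the examples deemed most useful by some decoding-difficulty measure. The sketch below is an illustrative assumption, not the paper's implementation: it stands in a Hamming(7,4) parity-check matrix for the cycle-reduced BCH matrices, and uses hard-decision syndrome weight as a placeholder difficulty proxy; the function names (`sample_channel_outputs`, `active_select`) and the keep-hardest selection rule are hypothetical.

```python
# Minimal sketch of active-learning-style sample selection for training a
# weighted-BP decoder. Placeholder code, not the paper's method.
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (placeholder for the
# cycle-reduced BCH parity-check matrices used in the paper).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.int64)
n = H.shape[1]
rng = np.random.default_rng(0)

def sample_channel_outputs(batch, snr_db, rate=4 / 7):
    """Transmit the all-zero codeword over BPSK/AWGN and return channel LLRs."""
    sigma = np.sqrt(1.0 / (2.0 * rate * 10 ** (snr_db / 10.0)))
    received = 1.0 + rng.normal(0.0, sigma, size=(batch, n))  # all-zero -> +1
    return 2.0 * received / sigma ** 2

def syndrome_weight(llrs):
    """Difficulty proxy (assumed): number of unsatisfied checks after hard decision."""
    hard = (llrs < 0).astype(np.int64)
    return (H @ hard.T % 2).sum(axis=0)

def active_select(llrs, keep_ratio=0.25):
    """Keep the hardest fraction of samples (largest syndrome weight) for training."""
    weights = syndrome_weight(llrs)
    k = max(1, int(keep_ratio * len(weights)))
    idx = np.argsort(weights)[-k:]
    return llrs[idx]

llrs = sample_channel_outputs(batch=1024, snr_db=4.0)
train_batch = active_select(llrs)
print(train_batch.shape)  # selected subset that would feed the WBP training loop
```

Because the selected batch is chosen before training, the decoder's inference-time architecture and complexity are unchanged; only the composition of the training set differs, which matches the abstract's claim of gains "without increasing inference (decoding) complexity."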

Original language: English
Article number: 8911465
Pages (from-to): 728-736
Number of pages: 9
Journal: IEEE Transactions on Communications
Volume: 68
Issue number: 2
DOIs
State: Published - 1 Feb 2020
Externally published: Yes

Keywords

  • Deep learning
  • active learning
  • belief propagation
  • error correcting codes
  • machine learning

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
