Abstract
Directed information (DI) is a fundamental measure for the study and analysis of sequential stochastic models. In particular, when optimized over input distributions, it characterizes the capacity of general communication channels. However, analytic computation of DI is typically intractable, and existing optimization techniques over discrete input alphabets require knowledge of the channel model, which renders them inapplicable when only samples are available. To overcome these limitations, we propose a novel framework for optimizing estimated DI over discrete input spaces. We formulate DI optimization as a Markov decision process and leverage reinforcement learning techniques to optimize a deep generative model of the input process probability mass function (PMF). Combining this optimizer with the recently developed DI neural estimator, we obtain an alternating optimization algorithm, which we apply to estimating the (feedforward and feedback) capacity of various discrete channels with memory. Furthermore, we demonstrate how to use the optimized PMF model to (i) obtain theoretical bounds on the feedback capacity of unifilar finite-state channels; and (ii) perform probabilistic shaping of constellations in the peak power-constrained additive white Gaussian noise channel.
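The alternating estimate-then-optimize loop described above can be illustrated on a toy problem. The sketch below is not the paper's method: it replaces the neural DI estimator with an exact mutual-information computation for a known memoryless binary symmetric channel, and the deep generative PMF model with a softmax-parameterized input distribution updated by numerical gradient ascent. All names (`mutual_information`, `eps`, `theta`) are illustrative assumptions.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log2(p))

# Toy stand-in for the channels-with-memory treated in the paper:
# a binary symmetric channel with crossover probability eps.
eps = 0.1
W = np.array([[1 - eps, eps],
              [eps, 1 - eps]])  # W[x, y] = P(Y = y | X = x)

def mutual_information(theta):
    """'Estimation' step: here exact I(X;Y) for a softmax-parameterized input PMF."""
    p = np.exp(theta - theta.max())
    p /= p.sum()
    py = p @ W  # output marginal
    hyx = np.sum(p * np.array([entropy(W[x]) for x in range(2)]))  # H(Y|X)
    return entropy(py) - hyx, p

# 'Optimization' step: numerical gradient ascent on the PMF parameters,
# a crude proxy for the reinforcement-learning update of the generative model.
theta = np.array([1.0, -1.0])
lr, delta = 0.5, 1e-5
for _ in range(200):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += delta
        tm[i] -= delta
        grad[i] = (mutual_information(tp)[0] - mutual_information(tm)[0]) / (2 * delta)
    theta += lr * grad

mi, p_opt = mutual_information(theta)
print(round(mi, 3), np.round(p_opt, 3))  # converges to capacity 1 - H(0.1) ≈ 0.531 bits
```

For this memoryless channel the optimum is the uniform input and the loop recovers the known capacity; the paper's contribution is making the analogous loop work when the channel has memory and is only accessible through samples.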
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 1652-1670 |
| Number of pages | 19 |
| Journal | IEEE Transactions on Information Theory |
| Volume | 70 |
| Issue number | 3 |
| DOIs | |
| State | Published - 1 Mar 2024 |
Keywords
- Alternating optimization
- channel capacity
- directed information
- generative modeling
- neural estimation
- probabilistic shaping
- reinforcement learning
ASJC Scopus subject areas
- Information Systems
- Computer Science Applications
- Library and Information Sciences