## Abstract

The design of methods for inference from time sequences has traditionally relied on statistical models that describe the relation between a latent desired sequence and the observed one. A broad family of model-based algorithms has been derived to carry out inference at controllable complexity using recursive computations over the factor graph representing the underlying distribution. An alternative model-agnostic approach utilizes machine learning (ML) methods. Here we propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences. In the proposed approach, neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence, rather than the complete inference task. By exploiting the stationarity of this distribution, the resulting approach can be applied to sequences of varying temporal duration. Learned factor graphs can be realized using compact neural networks that are trainable using small training sets, or alternatively, be used to improve upon existing deep inference systems. We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data and can be applied to sequences of different lengths. Our experimental results demonstrate the ability of the proposed learned factor graphs to learn from small training sets to carry out accurate inference for sleep stage detection using the Sleep-EDF dataset, as well as for symbol detection in digital communications with unknown channels.
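To make the idea concrete, the following is a minimal sketch (not the authors' code) of sum-product message passing over a stationary chain-structured factor graph. The hypothetical `learned_factor` function stands in for a trained neural network that maps an observation to a factor table over state pairs; because the sequence is stationary, the same learned mapping is reused at every time step, which is what allows inference over sequences of arbitrary length.

```python
import numpy as np

rng = np.random.default_rng(0)
S = 4   # number of latent states
T = 6   # sequence length (varies freely: the same factor is reused per step)

def learned_factor(y_t):
    """Stand-in for a trained NN mapping an observation y_t to an S x S
    factor table indexed by (previous state, current state)."""
    logits = rng.normal(size=(S, S)) + y_t   # hypothetical placeholder
    table = np.exp(logits)
    return table / table.sum()

def sum_product_marginals(y):
    """Forward-backward (sum-product on a chain) over learned factors."""
    factors = [learned_factor(y_t) for y_t in y]   # one table per time step
    # Forward messages, normalized for numerical stability
    fwd = [np.full(S, 1.0 / S)]
    for F in factors:
        m = fwd[-1] @ F
        fwd.append(m / m.sum())
    # Backward messages
    bwd = [np.ones(S)]
    for F in reversed(factors):
        m = F @ bwd[0]
        bwd.insert(0, m / m.sum())
    # Per-step posterior over the latent state: product of incoming messages
    post = np.array([f * b for f, b in zip(fwd[1:], bwd[1:])])
    return post / post.sum(axis=1, keepdims=True)

y = rng.normal(size=T)
marginals = sum_product_marginals(y)
print(marginals.shape)   # one distribution over S states per time step
```

In the learned-factor-graph setting, training would fit the factor network from labeled data; the message-passing machinery above stays fixed, which is the model-based part of the hybrid.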

Field | Value
---|---
Original language | English
Pages (from-to) | 366-380
Number of pages | 15
Journal | IEEE Transactions on Signal Processing
Volume | 70
DOIs | |
State | Published - 31 Dec 2021

## Keywords

- Factor graphs
- deep learning
- time sequences

## ASJC Scopus subject areas

- Signal Processing
- Electrical and Electronic Engineering