Learn to Rapidly and Robustly Optimize Hybrid Precoding

Ortal Lavi, Nir Shlezinger

Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

Hybrid precoding plays a key role in realizing massive multiple-input multiple-output (MIMO) transmitters with controllable cost. MIMO precoders must adapt frequently to variations in the channel conditions. In hybrid MIMO, where precoding combines digital and analog beamforming, such adaptation involves lengthy optimization and depends on accurate channel state information (CSI). This degrades the spectral efficiency when the channel varies rapidly and when operating with noisy CSI. In this work, we employ deep learning techniques to learn how to rapidly and robustly optimize hybrid precoders while remaining fully interpretable. We leverage data to learn iteration-dependent hyperparameter settings of projected gradient sum-rate optimization with a predefined number of iterations. The algorithm maps channel realizations into hybrid precoding settings while preserving the interpretable flow of the optimizer and improving its inference speed. To cope with noisy CSI, we learn to optimize the minimal achievable sum-rate among all tolerable errors, proposing a hybrid precoder based on the projected conceptual mirror prox minimax optimizer. Numerical results demonstrate that our approach allows using over ten times fewer iterations than required by conventional optimization with shared hyperparameters, while achieving similar or even improved performance.
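The unfolded optimizer described in the abstract can be illustrated with a minimal sketch: a fixed number of projected gradient ascent iterations on the sum-rate, each with its own step-size pair (the hyperparameters that, in the paper, are learned from data; here they are simply fixed constants). The antenna dimensions, power normalization, and step-size values below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sum_rate(H, Fa, Fd, sigma2=1.0):
    """Achievable rate log2 det(I + H F F^H H^H / sigma2) for F = Fa @ Fd."""
    HF = H @ (Fa @ Fd)
    M = np.eye(H.shape[0]) + HF @ HF.conj().T / sigma2
    return float(np.real(np.log2(np.linalg.det(M))))

def unfolded_pga(H, Fa, Fd, step_sizes, P=1.0, sigma2=1.0):
    """Fixed-depth projected gradient ascent on the hybrid precoder.

    step_sizes is a list of (mu_a, mu_d) pairs, one per iteration; these
    per-iteration values play the role of the learned hyperparameters.
    Fa (analog, phase shifters) is projected onto unit-modulus entries,
    and Fa @ Fd is rescaled to meet the power budget P.
    """
    for mu_a, mu_d in step_sizes:
        HF = H @ (Fa @ Fd)
        M = np.eye(H.shape[0]) + HF @ HF.conj().T / sigma2
        # Wirtinger gradient of the rate w.r.t. F = Fa @ Fd (up to 1/ln 2)
        G = H.conj().T @ np.linalg.solve(M, HF) / sigma2
        Fa = Fa + mu_a * G @ Fd.conj().T
        Fa = Fa / np.abs(Fa)                             # unit-modulus projection
        Fd = Fd + mu_d * Fa.conj().T @ G
        Fd = Fd * np.sqrt(P) / np.linalg.norm(Fa @ Fd)   # power projection
    return Fa, Fd

# Illustrative setup: 4 rx antennas, 8 tx antennas, 4 RF chains, 2 streams.
rng = np.random.default_rng(0)
N, Mt, L, K = 4, 8, 4, 2
H = (rng.standard_normal((N, Mt)) + 1j * rng.standard_normal((N, Mt))) / np.sqrt(2)
Fa = np.exp(1j * rng.uniform(0, 2 * np.pi, (Mt, L)))
Fd = rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K))
Fd = Fd / np.linalg.norm(Fa @ Fd)

steps = [(0.05, 0.05)] * 10   # shared here; learned per iteration in the paper
Fa, Fd = unfolded_pga(H, Fa, Fd, steps)
rate = sum_rate(H, Fa, Fd)
```

Because the iteration count is fixed at unfolding time, inference latency is deterministic, and the per-iteration step sizes can be trained end-to-end (e.g., by backpropagating the negative sum-rate through the loop) without altering the interpretable structure of the optimizer.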

Original language: English
Pages (from-to): 5814-5830
Number of pages: 17
Journal: IEEE Transactions on Communications
Volume: 71
Issue number: 10
DOIs
State: Published - 5 Jul 2023

Keywords

  • Hybrid beamforming
  • deep unfolding
  • robust optimization

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
