Sampling the variational posterior with local refinement

  • Marton Havasi
  • Jasper Snoek
  • Dustin Tran
  • Jonathan Gordon
  • José Miguel Hernández-Lobato

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Variational inference is an optimization-based method for approximating the posterior distribution of the parameters in Bayesian probabilistic models. A key challenge of variational inference is to approximate the posterior with a distribution that is computationally tractable yet sufficiently expressive. We propose a novel method for generating samples from a highly flexible variational approximation. The method starts with a coarse initial approximation and generates samples by refining it in selected, local regions. This allows the samples to capture dependencies and multi-modality in the posterior, even when these are absent from the initial approximation. We demonstrate theoretically that our method always improves the quality of the approximation (as measured by the evidence lower bound). In experiments, our method consistently outperforms recent variational inference methods in terms of log-likelihood and ELBO across three example tasks: the Eight-Schools example (an inference task in a hierarchical model), training a ResNet-20 (Bayesian inference in a large neural network), and the Mushroom task (posterior sampling in a contextual bandit problem).
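
The abstract describes the method only at a high level. As a rough illustration of the general idea (start from a coarse global approximation, then refine locally around each sample), here is a minimal Python sketch on a toy 1-D bimodal target. It is not the paper's algorithm: the names (log_p, refined_sample), the toy target, and the use of plain gradient ascent on log p as a stand-in for the local ELBO optimization the abstract describes are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-modal "posterior": unnormalized mixture of two Gaussians
# with modes at z = -2 and z = +2 (both with scale 0.5).
def log_p(z):
    return np.logaddexp(-0.5 * ((z + 2.0) / 0.5) ** 2,
                        -0.5 * ((z - 2.0) / 0.5) ** 2)

def grad_log_p(z, eps=1e-5):
    # Central finite difference; adequate for a 1-D sketch.
    return (log_p(z + eps) - log_p(z - eps)) / (2.0 * eps)

# Coarse initial approximation: one broad Gaussian, q0 = N(0, 3^2).
mu0, sigma0 = 0.0, 3.0

def refined_sample(n_steps=50, lr=0.1, sigma_local=0.3):
    """Draw one sample by locally refining the coarse approximation.

    1. Sample a starting point from the coarse global Gaussian.
    2. Refine the mean of a local Gaussian by gradient ascent on
       log p (a stand-in for optimizing the local ELBO).
    3. Return a draw from the refined local Gaussian.
    """
    mu = mu0 + sigma0 * rng.standard_normal()
    for _ in range(n_steps):
        mu += lr * grad_log_p(mu)
    return mu + sigma_local * rng.standard_normal()

samples = np.array([refined_sample() for _ in range(1000)])
# The refined samples concentrate around both modes (z near -2 and
# z near +2), even though the coarse approximation is unimodal.
print(f"fraction near left mode:  {np.mean(samples < 0):.2f}")
print(f"fraction near right mode: {np.mean(samples > 0):.2f}")
```

Because each coarse draw is refined toward whichever mode it lands nearest, the refined sample set covers both modes even though the initial approximation is a single unimodal Gaussian, mirroring the multi-modality claim in the abstract.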

Original language: English
Article number: 1475
Journal: Entropy
Volume: 23
Issue number: 11
DOIs
State: Published - 1 Nov 2021
Externally published: Yes

Keywords

  • Bayesian inference
  • Contextual bandits
  • Deep neural networks
  • Variational inference

ASJC Scopus subject areas

  • Information Systems
  • Mathematical Physics
  • Physics and Astronomy (miscellaneous)
  • General Physics and Astronomy
  • Electrical and Electronic Engineering
