Bob and Alice Go to a Bar: Reasoning About Future With Probabilistic Programs

David Tolpin, Tomer Dobkin

Research output: Working paper/Preprint

Abstract

It is well known that reinforcement learning can be cast as inference in an appropriate probabilistic model. However, this commonly involves introducing a distribution over agent trajectories with probabilities proportional to exponentiated rewards. In this work, we formulate reinforcement learning as Bayesian inference without resorting to rewards, and show that rewards are derived from the agent's preferences, rather than the other way around. We argue that agent preferences should be specified stochastically rather than deterministically. Reinforcement learning via inference with stochastic preferences naturally describes agent behaviors, does not require introducing rewards or exponential weighting of trajectories, and allows us to reason about agents on the solid foundation of Bayesian statistics. Stochastic conditioning, a probabilistic programming paradigm for conditioning models on distributions rather than values, is the formalism behind agents with probabilistic preferences. We demonstrate the realization of our approach in case studies using both a two-agent coordination game and a single agent acting in a noisy environment, showing that despite superficial differences, both cases can be modeled and reasoned about based on the same principles.
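
Illustrative example. The following is a minimal sketch, not the authors' code, of the kind of reasoning the abstract describes: an agent's preference is given as a distribution over outcomes, and the agent's action is inferred by conditioning on that preference via likelihood weighting. The bar names, Bob's assumed policy, and the preference probabilities are illustrative assumptions.

    # Sketch: two-agent coordination ("which bar to meet at") treated as
    # Bayesian inference with a stochastic preference over outcomes.
    import random
    from collections import Counter

    BARS = ["Craft Beer", "Wine Bar"]
    PREFERENCE = {"meet": 0.9, "miss": 0.1}   # stochastic preference, not a reward

    def bob_policy():
        # Alice's prior belief about Bob: he slightly favours the first bar.
        return random.choices(BARS, weights=[0.6, 0.4])[0]

    def infer_alice_choice(n_samples=10_000):
        # Infer Alice's action by weighting each sampled action with the
        # probability her preference assigns to the resulting outcome.
        weights = Counter()
        for _ in range(n_samples):
            alice = random.choice(BARS)            # uniform prior over actions
            bob = bob_policy()                     # sample Bob from Alice's belief
            outcome = "meet" if alice == bob else "miss"
            weights[alice] += PREFERENCE[outcome]  # soft conditioning on preference
        total = sum(weights.values())
        return {bar: w / total for bar, w in weights.items()}

    if __name__ == "__main__":
        # Alice leans toward the bar Bob is more likely to pick.
        print(infer_alice_choice())

Here the posterior over Alice's action follows from conditioning on the preference distribution rather than from maximizing an exponentiated reward; the same pattern applies, under the paper's framing, to a single agent acting in a noisy environment.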
Original language: English
State: Published - 6 Oct 2021
