Example-guided learning of stochastic human driving policies using deep reinforcement learning

Ran Emuna, Rotem Duffney, Avinoam Borowsky, Armin Biess

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Deep reinforcement learning has been successfully applied to the generation of goal-directed behavior in artificial agents. However, existing algorithms are often not designed to reproduce human-like behavior, which may be desired in many environments, such as human–robot collaborations, social robotics and autonomous vehicles. Here we introduce a model-free and easy-to-implement deep reinforcement learning approach to mimic the stochastic behavior of a human expert by learning distributions of task variables from examples. As tractable use-cases, we study static and dynamic obstacle avoidance tasks for an autonomous vehicle on a highway road in simulation (Unity). Our control algorithm receives a feedback signal from two sources: a deterministic (handcrafted) part encoding basic task goals and a stochastic (data-driven) part that incorporates human expert knowledge. Gaussian processes are used to model human state distributions and to assess the similarity between machine and human behavior. Using this generic approach, we demonstrate that the learning agent acquires human-like driving skills and can generalize to new roads and obstacle distributions unseen during training.
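The abstract describes a feedback signal with a deterministic task term plus a stochastic, data-driven term, where Gaussian processes model distributions of human task variables (e.g., lateral position along the road). A minimal NumPy sketch of that idea follows; the function names, the RBF-kernel GP, and the specific Gaussian-similarity reward form are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def rbf_kernel(a, b, length=5.0, var=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=0.05):
    """Standard GP regression: predictive mean and std at x_test,
    conditioned on noisy human-expert samples (x_train, y_train)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    Kss = rbf_kernel(x_test, x_test)
    K_inv = np.linalg.inv(K)
    mean = Ks @ K_inv @ y_train
    cov = Kss - Ks @ K_inv @ Ks.T
    std = np.sqrt(np.clip(np.diag(cov), 1e-9, None))
    return mean, std

def combined_reward(task_reward, agent_y, mu, sigma,
                    w_task=1.0, w_human=1.0):
    """Deterministic task term plus a data-driven term that rewards
    staying within the human state distribution predicted by the GP."""
    human_term = np.exp(-0.5 * ((agent_y - mu) / sigma) ** 2)
    return w_task * task_reward + w_human * human_term

# Toy usage: expert lateral positions sampled along a road segment.
x_expert = np.linspace(0.0, 100.0, 20)          # longitudinal position (m)
y_expert = np.sin(x_expert / 20.0)               # lateral position (m)
mu, sd = gp_predict(x_expert, y_expert, np.array([50.0]))
r_close = combined_reward(0.0, mu[0], mu[0], sd[0])
r_far = combined_reward(0.0, mu[0] + 2.0, mu[0], sd[0])
```

An agent matching the human distribution (`r_close`) receives a larger similarity bonus than one driving far from it (`r_far`), which is the general mechanism the abstract outlines.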

Original language: English
Pages (from-to): 16791-16804
Number of pages: 14
Journal: Neural Computing and Applications
Volume: 35
Issue number: 23
DOIs
State: Published - 23 Dec 2022

Keywords

  • Deep reinforcement learning
  • Gaussian processes
  • Human driving policies
  • Imitation learning

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
