Enhancing Deep Reinforcement Learning with Scenario-Based Modeling

Raz Yerushalmi, Guy Amir, Achiya Elyasaf, David Harel, Guy Katz, Assaf Marron

Research output: Contribution to journal › Article › Peer-review



Deep reinforcement learning (DRL) agents have achieved unprecedented results when learning to generalize from unstructured data. However, the "black-box" nature of trained DRL agents makes it difficult to ensure that they adhere to various requirements posed by engineers. In this work, we put forth a novel technique for enhancing the reinforcement learning training loop, and specifically its reward function, in a way that allows engineers to directly inject their expert knowledge into the training process. This allows us to make the trained agent adhere to multiple constraints of interest. Moreover, using scenario-based modeling techniques, our method allows users to formulate the defined constraints using advanced, well-established behavioral modeling methods. Combining such modeling methods with machine learning tools produces agents that are both high performing and more likely to adhere to prescribed constraints. Furthermore, the resulting agents are more transparent and hence more maintainable. We demonstrate our technique by evaluating it on a case study from the domain of internet congestion control, and present promising results.
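The abstract describes injecting engineer-written constraints into the reward function during training. A minimal sketch of this idea, assuming a reward-shaping scheme in which each rule is a plain Python predicate standing in for a scenario-based specification (the function names, rule forms, and penalty scheme here are illustrative, not the authors' actual implementation):

```python
from typing import Callable, List

# A rule maps a (state, action) pair to True when the pair is allowed.
Rule = Callable[[dict, int], bool]

def shaped_reward(base_reward: float, state: dict, action: int,
                  rules: List[Rule], penalty: float = 1.0) -> float:
    """Subtract a fixed penalty for every rule the chosen action violates."""
    violations = sum(1 for rule in rules if not rule(state, action))
    return base_reward - penalty * violations

# Illustrative rules for a toy congestion-control setting:
# do not increase the sending rate (action 1) when loss is already high,
# and never take the aggressive action (action 2) on a congested link.
rules = [
    lambda s, a: not (a == 1 and s["loss_rate"] > 0.1),
    lambda s, a: not (a == 2 and s["congested"]),
]

state = {"loss_rate": 0.2, "congested": True}
print(shaped_reward(1.0, state, action=1, rules=rules, penalty=0.5))  # 0.5
```

During training, the shaped reward replaces the environment's raw reward, so the agent is steered away from constraint-violating behavior while still optimizing the original objective.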

Original language: English
Article number: 156
Journal: SN Computer Science
Issue number: 2
State: Published - 11 Jan 2023


Keywords
  • Deep reinforcement learning
  • Domain expertise
  • Machine learning
  • Rule-based specifications
  • Scenario-based modeling

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Computer Networks and Communications
  • Computer Science Applications
  • General Computer Science
  • Artificial Intelligence
  • Computer Graphics and Computer-Aided Design


