Learning to Operate in Open Worlds by Adapting Planning Models

Wiktor Piotrowski, Roni Stern, Yoni Sher, Jacob Le, Matthew Klenk, Johan de Kleer, Shiwali Mohan

Research output: Contribution to journal › Conference article › peer-review


Abstract

Planning agents are ill-equipped to act in novel situations in which their domain model no longer accurately represents the world. We introduce an approach for such agents operating in open worlds that detects the presence of novelties and adapts their domain models and, consequently, their action selection. The approach compares observations of action execution against what the environment model predicts and uses the measured divergence to infer the existence of a novelty. It then revises the model through a heuristics-guided search over model changes. We report empirical evaluations on the CartPole problem, a standard Reinforcement Learning (RL) benchmark. The results show that our approach can deal with a class of novelties very quickly and in an interpretable fashion.
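
As a rough illustration of the detect-then-repair loop described in the abstract, the Python sketch below shows one way an agent might flag a novelty from prediction divergence and then repair a parametric transition model with a best-first search over candidate parameter edits. All names (DomainModel, novelty_detected, repair_model, candidate_edits), the divergence measure, and the search strategy are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch only; names and heuristics are illustrative, not from the paper.
import heapq
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = Tuple[float, ...]
Transition = Tuple[State, int, State]  # (state, action, observed next state)


@dataclass
class DomainModel:
    """A transition model whose numeric parameters are the repairable part."""
    params: Dict[str, float]
    transition: Callable[[State, int, Dict[str, float]], State]

    def predict(self, state: State, action: int) -> State:
        return self.transition(state, action, self.params)


def divergence(predicted: State, observed: State) -> float:
    """Euclidean distance between the model's prediction and the real outcome."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) ** 0.5


def novelty_detected(model: DomainModel, trace: List[Transition],
                     threshold: float = 0.05) -> bool:
    """Flag a novelty when the mean prediction error exceeds a tolerance."""
    errors = [divergence(model.predict(s, a), s_next) for s, a, s_next in trace]
    return sum(errors) / len(errors) > threshold


def repair_model(model: DomainModel, trace: List[Transition],
                 candidate_edits: List[Tuple[str, float]],
                 max_expansions: int = 500) -> DomainModel:
    """Best-first search over single-parameter edits, scored by prediction error."""
    def score(params: Dict[str, float]) -> float:
        trial = DomainModel(params, model.transition)
        return sum(divergence(trial.predict(s, a), s_next) for s, a, s_next in trace)

    tie = 0  # tiebreaker so the heap never compares dicts
    frontier = [(score(model.params), tie, model.params)]
    best_score, best_params = frontier[0][0], model.params
    for _ in range(max_expansions):
        if not frontier:
            break
        current_score, _, params = heapq.heappop(frontier)
        if current_score < best_score:
            best_score, best_params = current_score, params
        for name, delta in candidate_edits:
            tie += 1
            new_params = {**params, name: params.get(name, 0.0) + delta}
            heapq.heappush(frontier, (score(new_params), tie, new_params))
    return DomainModel(best_params, model.transition)
```

On CartPole, for instance, trace could hold recent (state, action, next-state) observations, transition could be the agent's forward model of the cart-pole dynamics, and candidate_edits could nudge parameters such as pole length or cart mass; once novelty_detected fires, the agent would replan with the model returned by repair_model. These bindings are likewise assumptions for illustration.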

Original language: English
Pages (from-to): 2610-2612
Number of pages: 3
Journal: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Volume: 2023-May
State: Published - 1 Jan 2023
Event: 22nd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2023 - London, United Kingdom
Duration: 29 May 2023 - 2 Jun 2023

Keywords

  • Adaptive Agents
  • Model Repair
  • Open World Learning
  • Planning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
