Abstract
This extended abstract introduces a novel setting of reinforcement learning with constraints, called Relaxed Exploration Constrained Reinforcement Learning (RECRL). As in standard constrained reinforcement learning (CRL), the aim is to find a policy that maximizes environmental return subject to a set of constraints. However, RECRL includes an initial training phase in which the constraints are relaxed, allowing the agent to explore the environment more freely. Once training is complete, the agent is deployed in the environment and is required to satisfy the full set of constraints. As an initial approach to RECRL, we introduce a curriculum-based method, named CLiC, that can be applied on top of existing CRL algorithms to improve their exploration during training while gradually converging to a policy that satisfies the full set of constraints. Empirical evaluation shows that CLiC yields policies with higher deployment returns than policies trained under the strict set of constraints throughout.
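To make the setting concrete: CRL is commonly formalized as a constrained MDP in which the agent maximizes expected return subject to expected-cost budgets, and a curriculum like CLiC can be viewed as tightening those budgets over training. The sketch below shows one plausible way such a constraint-tightening schedule could look. The abstract does not specify CLiC's actual schedule, so the linear annealing and all names here (`cost_limit_schedule`, `relaxed_limit`, `strict_limit`) are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of a constraint-tightening curriculum in the spirit of CLiC.
# Assumption: the curriculum is expressed as a per-step cost budget d_t that
# starts relaxed and is annealed toward the strict deployment budget.

def cost_limit_schedule(step: int, total_steps: int,
                        relaxed_limit: float, strict_limit: float) -> float:
    """Anneal the cost budget d_t linearly from a relaxed value (free
    exploration early in training) down to the strict budget the agent
    must satisfy at deployment."""
    frac = min(step / total_steps, 1.0)
    return relaxed_limit + frac * (strict_limit - relaxed_limit)


if __name__ == "__main__":
    # The budget starts at 50.0 (relaxed) and reaches 10.0 (strict) by the
    # end of training. A CRL learner would use d_t as its cost limit at each
    # update, e.g. in a Lagrangian objective max_pi E[R] - lam * (E[C] - d_t).
    for step in (0, 250_000, 500_000, 1_000_000):
        d_t = cost_limit_schedule(step, 1_000_000, 50.0, 10.0)
        print(f"step={step:>9,d}  cost limit d_t={d_t:.1f}")
```

Any monotone schedule (exponential decay, stepwise tightening per curriculum stage) would fit the same interface; the key property is that the budget passed to the CRL update equals the strict limit by the end of training.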
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 2821–2823 |
| Number of pages | 3 |
| Journal | Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS |
| Volume | 2023-May |
| State | Published - 1 Jan 2023 |
| Event | 22nd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2023, London, United Kingdom (29 May 2023 – 2 Jun 2023) |
Keywords
- Constrained Reinforcement Learning
- Curriculum Learning
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering