Constrained Intrinsic Motivation for Reinforcement Learning
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Zheng, Xiang; Ma, Xingjun; Shen, Chao et al.
Detail(s)
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24) |
| Editors | Kate Larson |
| Publisher | International Joint Conferences on Artificial Intelligence |
| Pages | 5608-5616 |
| ISBN (electronic) | 978-1-956792-04-1 |
| Publication status | Published - Aug 2024 |
Publication series
| Name | IJCAI International Joint Conference on Artificial Intelligence |
| --- | --- |
| ISSN (Print) | 1045-0823 |
Conference
| Title | 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024) |
| --- | --- |
| Location | International Convention Center Jeju |
| Place | Korea, Republic of |
| City | Jeju Island |
| Period | 3 - 9 August 2024 |
Abstract
This paper investigates two fundamental problems that arise when utilizing Intrinsic Motivation (IM) for reinforcement learning in Reward-Free Pre-Training (RFPT) tasks and Exploration with Intrinsic Motivation (EIM) tasks: 1) how to design an effective intrinsic objective in RFPT tasks, and 2) how to reduce the bias introduced by the intrinsic objective in EIM tasks. Existing IM methods suffer from static skills, limited state coverage, sample inefficiency in RFPT tasks, and suboptimality in EIM tasks. To tackle these problems, we propose Constrained Intrinsic Motivation (CIM) for RFPT and EIM tasks, respectively: 1) CIM for RFPT maximizes the lower bound of the conditional state entropy subject to an alignment constraint on the state encoder network for efficient dynamic and diverse skill discovery and state coverage maximization; 2) CIM for EIM leverages constrained policy optimization to adaptively adjust the coefficient of the intrinsic objective to mitigate the distraction from the intrinsic objective. In various MuJoCo robotics environments, we empirically show that CIM for RFPT greatly surpasses fifteen IM methods for unsupervised skill discovery in terms of skill diversity, state coverage, and fine-tuning performance. Additionally, we showcase the effectiveness of CIM for EIM in redeeming intrinsic rewards when task rewards are exposed from the beginning. Our code is available at https://github.com/x-zheng16/CIM. © 2024 International Joint Conferences on Artificial Intelligence. All rights reserved.
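The second contribution, CIM for EIM, can be read as a Lagrangian-style dual update on the coefficient of the intrinsic objective. The sketch below is a minimal illustration of that general idea only, not the authors' implementation (which is in the linked repository); the class name `LagrangianIntrinsicCoef`, the constraint form, and all hyperparameters are assumptions.

```python
class LagrangianIntrinsicCoef:
    """Minimal sketch of adapting the intrinsic-reward coefficient via a
    dual (Lagrangian) update, in the spirit of constrained policy
    optimization. All names and the constraint form are hypothetical;
    the authors' implementation is at https://github.com/x-zheng16/CIM.
    """

    def __init__(self, init_coef=1.0, dual_lr=1e-3, target_intrinsic=0.0):
        self.coef = init_coef           # current intrinsic-objective coefficient
        self.dual_lr = dual_lr          # step size of the dual update
        self.target = target_intrinsic  # assumed constraint level on intrinsic return

    def update(self, mean_intrinsic_return):
        # Dual ascent on the constraint violation: once the intrinsic
        # objective is satisfied, the coefficient shrinks toward zero,
        # so the task reward dominates and the intrinsic bias fades.
        violation = self.target - mean_intrinsic_return
        self.coef = max(0.0, self.coef + self.dual_lr * violation)
        return self.coef

    def shaped_reward(self, task_reward, intrinsic_reward):
        # Reward actually fed to the policy-gradient learner.
        return task_reward + self.coef * intrinsic_reward
```

Because the coefficient is clipped at zero and decays whenever the intrinsic constraint is met, the shaped reward converges to the task reward alone, which matches the abstract's stated goal of mitigating the distraction introduced by the intrinsic objective.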
Research Area(s)
- Reinforcement Learning, Intrinsic Motivation, Unsupervised Skill Discovery
Bibliographic Note
Information for this record is supplemented by the author(s) concerned.
Citation Format(s)
Constrained Intrinsic Motivation for Reinforcement Learning. / Zheng, Xiang; Ma, Xingjun; Shen, Chao et al.
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). ed. / Kate Larson. International Joint Conferences on Artificial Intelligence, 2024. p. 5608-5616 (IJCAI International Joint Conference on Artificial Intelligence).