Abstract
Diffusion models have achieved remarkable success in sequential decision-making by leveraging their highly expressive generative capabilities for policy learning. A central problem in learning diffusion policies is aligning the policy output with human intent across various tasks. To achieve this, previous methods perform return-conditioned policy generation or Reinforcement Learning (RL)-based policy optimization, but both rely on pre-defined reward functions. In this work, we propose a novel framework, Forward KL regularized Preference optimization for aligning Diffusion policies, to align the diffusion policy with preferences directly. We first train a diffusion policy from the offline dataset without considering preferences, and then align the policy to the preference data via direct preference optimization. During the alignment phase, we formulate direct preference learning for a diffusion policy, employing forward KL regularization in preference optimization to avoid generating out-of-distribution actions. We conduct extensive experiments on MetaWorld manipulation and D4RL tasks. The results show that our method exhibits superior alignment with preferences and outperforms previous state-of-the-art algorithms. Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
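The abstract describes a DPO-style preference objective combined with a forward KL regularizer that keeps the aligned policy close to the behavior policy. The following is a minimal conceptual sketch of these two ingredients, not the authors' implementation: the function names, the scalar log-probability interface, and the discretized-action KL are all simplifying assumptions, and in the actual method the log-probabilities would come from the diffusion policy's denoising objective.

```python
import numpy as np

def sigmoid(x):
    # Numerically plain logistic function.
    return 1.0 / (1.0 + np.exp(-x))

def preference_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO-style preference loss (hypothetical scalar interface).

    logp_w / logp_l: current policy log-probs of the preferred ("winner")
    and dispreferred ("loser") actions; ref_* are the same quantities
    under the frozen reference (pretrained) policy.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return float(-np.log(sigmoid(margin)))

def forward_kl(ref_probs, probs, eps=1e-12):
    """Forward KL(pi_ref || pi) over a discretized action set.

    The forward direction is mass-covering: it penalizes the learned
    policy for assigning low probability where the behavior policy has
    mass, discouraging out-of-distribution actions.
    """
    ref = np.clip(ref_probs, eps, 1.0)
    p = np.clip(probs, eps, 1.0)
    return float(np.sum(ref * np.log(ref / p)))

def aligned_loss(logp_w, logp_l, ref_logp_w, ref_logp_l,
                 ref_probs, probs, beta=0.1, alpha=1.0):
    # Combined objective: preference term plus forward KL regularizer.
    return (preference_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta)
            + alpha * forward_kl(ref_probs, probs))
```

Note the design choice the abstract highlights: using the *forward* KL (reference first) rather than the reverse KL makes the regularizer mode-covering, so the aligned policy is pushed to retain support over in-distribution actions instead of collapsing to a single preferred mode.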
Original language | English |
---|---|
Title of host publication | Proceedings of the 39th Annual AAAI Conference on Artificial Intelligence |
Editors | Toby Walsh, Julie Shah, Zico Kolter |
Publisher | AAAI Press |
Pages | 14386-14395 |
Volume | 39 |
ISBN (Electronic) | 1-57735-897-X, 978-1-57735-897-8 |
DOIs | |
Publication status | Published - 2025 |
Externally published | Yes |
Event | 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025) - Pennsylvania Convention Center, Philadelphia, United States Duration: 25 Feb 2025 → 4 Mar 2025 https://aaai.org/conference/aaai/aaai-25/
Publication series
Name | Proceedings of the AAAI Conference on Artificial Intelligence |
---|---|
Publisher | Association for the Advancement of Artificial Intelligence |
ISSN (Print) | 2159-5399 |
Conference
Conference | 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025) |
---|---|
Abbreviated title | AAAI-25 |
Country/Territory | United States |
City | Philadelphia |
Period | 25/02/25 → 4/03/25 |
Internet address |
Funding
This study is supported by the National Natural Science Foundation of China (Grant No. 62306242). Zhao extends gratitude to Professor Chenjia for providing in-depth guidance on this work. Thanks are also extended to Chenyou and the other authors for their outstanding contributions to the research.