Forward KL Regularized Preference Optimization for Aligning Diffusion Policies

Zhao Shan, Chenyou Fan, Shuang Qiu, Jiyuan Shi, Chenjia Bai*

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

Abstract

Diffusion models have achieved remarkable success in sequential decision-making by leveraging their highly expressive modeling capabilities for policy learning. A central problem in learning diffusion policies is aligning the policy output with human intents across various tasks. To achieve this, previous methods perform return-conditioned policy generation or Reinforcement Learning (RL)-based policy optimization, but both rely on pre-defined reward functions. In this work, we propose a novel framework, Forward KL regularized Preference optimization for aligning Diffusion policies, which aligns the diffusion policy with preferences directly. We first train a diffusion policy on the offline dataset without considering preferences, and then align the policy to the preference data via direct preference optimization. During the alignment phase, we formulate direct preference learning for a diffusion policy, where forward KL regularization is employed in preference optimization to avoid generating out-of-distribution actions. We conduct extensive experiments on MetaWorld manipulation and D4RL tasks. The results show that our method exhibits superior alignment with preferences and outperforms previous state-of-the-art algorithms. Copyright © 2025, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
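The two-stage recipe described in the abstract (pretrain a diffusion policy on offline data, then align it to preference pairs with a forward-KL-regularized direct preference objective) can be illustrated with a short sketch. The code below is a minimal, illustrative PyTorch sketch and not the authors' implementation: it assumes a hypothetical epsilon-prediction policy network policy(noisy_action, t, state), a frozen reference copy ref_policy from the pretraining stage, and hypothetical batch keys (state, action_w, action_l, action_data). Since the exact diffusion log-likelihood is intractable, the per-timestep denoising error stands in for log pi(a|s), as in Diffusion-DPO-style surrogates, and the forward KL term KL(pi_data || pi_theta) is approximated by the standard denoising loss on dataset actions.

# Hedged sketch (not the paper's released code): DPO-style preference loss for a
# diffusion policy plus a forward-KL (behavior-cloning) regularizer.
import torch
import torch.nn.functional as F

def denoise_error(net, state, action, t, noise, alpha_bar):
    """Per-sample epsilon-prediction error at diffusion step t (lower = more likely action)."""
    a_t = alpha_bar[t].sqrt().unsqueeze(-1) * action + (1 - alpha_bar[t]).sqrt().unsqueeze(-1) * noise
    pred = net(a_t, t, state)          # assumed network signature: (noisy_action, t, state)
    return ((pred - noise) ** 2).mean(dim=-1)

def preference_alignment_loss(policy, ref_policy, batch, alpha_bar, beta=1.0, lam=1.0):
    state, a_w, a_l, a_data = batch["state"], batch["action_w"], batch["action_l"], batch["action_data"]
    B, T = state.shape[0], alpha_bar.shape[0]
    t = torch.randint(0, T, (B,), device=state.device)
    noise = torch.randn_like(a_w)

    # Implicit log-prob surrogates: negative denoising error, centered by the frozen reference.
    logp_w = -denoise_error(policy, state, a_w, t, noise, alpha_bar)
    logp_l = -denoise_error(policy, state, a_l, t, noise, alpha_bar)
    with torch.no_grad():
        ref_w = -denoise_error(ref_policy, state, a_w, t, noise, alpha_bar)
        ref_l = -denoise_error(ref_policy, state, a_l, t, noise, alpha_bar)

    # DPO-style Bradley-Terry loss on the reference-centered preference margin.
    margin = beta * ((logp_w - ref_w) - (logp_l - ref_l))
    pref_loss = -F.logsigmoid(margin).mean()

    # Forward-KL surrogate: KL(pi_data || pi_theta) equals the negative data log-likelihood
    # up to a constant, approximated here by the denoising loss on offline dataset actions.
    noise_d = torch.randn_like(a_data)
    fkl_loss = denoise_error(policy, state, a_data, t, noise_d, alpha_bar).mean()

    return pref_loss + lam * fkl_loss

The forward-KL surrogate penalizes the policy for assigning low likelihood to dataset actions, which is the mechanism the abstract credits for keeping preference optimization from drifting toward out-of-distribution actions; lam trades off preference alignment against staying on the data manifold.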
Original language: English
Title of host publication: Proceedings of the 39th Annual AAAI Conference on Artificial Intelligence
Editors: Toby Walsh, Julie Shah, Zico Kolter
Publisher: AAAI Press
Pages: 14386-14395
Volume: 39
ISBN (Electronic): 1-57735-897-X, 978-1-57735-897-8
DOIs
Publication status: Published - 2025
Externally published: Yes
Event: 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025) - Pennsylvania Convention Center, Philadelphia, United States
Duration: 25 Feb 2025 - 4 Mar 2025
https://aaai.org/conference/aaai/aaai-25/

Publication series

Name: Proceedings of the AAAI Conference on Artificial Intelligence
Publisher: Association for the Advancement of Artificial Intelligence
ISSN (Print): 2159-5399

Conference

Conference: 39th Annual AAAI Conference on Artificial Intelligence (AAAI 2025)
Abbreviated title: AAAI-25
Country/Territory: United States
City: Philadelphia
Period: 25/02/25 - 4/03/25
Internet address: https://aaai.org/conference/aaai/aaai-25/

Funding

This study is supported by the National Natural Science Foundation of China (Grant No. 62306242). Zhao extends gratitude to Professor Chenjia for providing in-depth guidance on this work. Thanks are also extended to Chenyou and the other authors for their outstanding contributions to the research.
