Fusing Reward and Dueling Feedback in Stochastic Bandits

Xuchuang Wang, Qirun Zeng, Jinhang Zuo, Xutong Liu, Mohammad Hajiesmaili, John C. S. Lui, Adam Wierman

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

Abstract

This paper investigates the fusion of absolute (reward) and relative (dueling) feedback in stochastic bandits, where both feedback types are gathered in each decision round. We derive a regret lower bound, demonstrating that an efficient algorithm need incur only the smaller of the reward-based and dueling-based regret for each individual arm. We propose two fusion approaches: (1) a simple elimination fusion algorithm that leverages both feedback types to explore all arms and unifies the collected information by sharing a common candidate arm set, and (2) a decomposition fusion algorithm that selects the more effective feedback type to explore the corresponding arms and, in each round, randomly assigns one feedback type to exploration and the other to exploitation. The elimination fusion incurs an extra multiplicative factor of the number of arms in its regret, owing to the intrinsic suboptimality of dueling elimination. In contrast, the decomposition fusion achieves regret matching the lower bound up to a constant under a common assumption. Extensive experiments confirm the efficacy of our algorithms and theoretical results.
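To make the elimination-fusion idea concrete, here is a minimal, hedged sketch of an elimination-style bandit that pools reward and dueling feedback through one shared candidate set. This is an illustrative toy, not the paper's exact algorithm: the reward model (Bernoulli), the dueling model (arm i beats j with probability 0.5 + (mu_i - mu_j)/2), and the confidence radius are all simplifying assumptions made here.

```python
import math
import random

def elimination_fusion(means, horizon=20000, seed=0):
    """Toy elimination fusion: both feedback types update one shared
    candidate set S; an arm is dropped if EITHER feedback type rules
    it out with high confidence. Illustrative only, not the paper's
    exact procedure."""
    rng = random.Random(seed)
    K = len(means)
    S = list(range(K))                  # shared candidate arm set
    pulls = [0] * K                     # reward-feedback sample counts
    rew = [0.0] * K                     # reward-feedback reward sums
    wins = [[0] * K for _ in range(K)]  # wins[i][j]: times i beat j
    t = 0
    while t < horizon and len(S) > 1:
        # each round: pull every surviving arm AND run one duel for it
        for i in S:
            j = rng.choice([a for a in S if a != i] or [i])
            pulls[i] += 1
            rew[i] += 1.0 if rng.random() < means[i] else 0.0
            # assumed linear dueling model: i beats j w.p. 0.5 + (mu_i - mu_j)/2
            if rng.random() < 0.5 + (means[i] - means[j]) / 2:
                wins[i][j] += 1
            else:
                wins[j][i] += 1
            t += 1
        # Hoeffding-style confidence radius (a standard choice, assumed here)
        def rad(n):
            return math.sqrt(2 * math.log(horizon) / max(n, 1))
        ucb = {i: rew[i] / pulls[i] + rad(pulls[i]) for i in S}
        lcb = {i: rew[i] / pulls[i] - rad(pulls[i]) for i in S}
        best_lcb = max(lcb.values())
        keep = []
        for i in S:
            # reward feedback rules i out: its UCB is below the best LCB
            reward_out = ucb[i] < best_lcb
            # dueling feedback rules i out: i confidently loses to some j
            duel_out = any(
                wins[i][j] + wins[j][i] > 0
                and wins[i][j] / (wins[i][j] + wins[j][i])
                    + rad(wins[i][j] + wins[j][i]) < 0.5
                for j in S if j != i
            )
            if not (reward_out or duel_out):
                keep.append(i)
        S = keep or S  # never let the candidate set go empty
    return S
```

The shared candidate set is the fusion mechanism: whichever feedback type becomes informative first eliminates the arm for both, so the weaker feedback type never keeps a bad arm alive on its own.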

Original language: English
Title of host publication: ICML'25: Proceedings of the 42nd International Conference on Machine Learning
Publisher: JMLR.org
Publication status: Presented - 15 Jul 2025
Event: 42nd International Conference on Machine Learning, ICML 2025 - Vancouver Convention Center, Vancouver, Canada
Duration: 13 Jul 2025 - 19 Jul 2025
https://icml.cc/Conferences/2025

Conference

Conference: 42nd International Conference on Machine Learning, ICML 2025
Abbreviated title: ICML 2025
Place: Canada
City: Vancouver
Period: 13/07/25 - 19/07/25
Internet address: https://icml.cc/Conferences/2025

Funding

The work of Jinhang Zuo was supported by CityUHK 9610706. The work of Mohammad Hajiesmaili was supported by NSF grants CAREER-2045641, CPS-2136199, and CNS-2325956. The work of John C.S. Lui was supported in part by the RGC SRFS2122-4S02. The work of Adam Wierman was supported by NSF grants CCF-2326609, CNS-2146814, CPS-2136197, CNS-2106403, and NGSDI-2105648, as well as funding from the Resnick Sustainability Institute. Xutong Liu is the corresponding author.

