Clustering Based Online Learning in Recommender Systems: A Bandit Approach
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
- Linqi Song
- Cem Tekin
- Mihaela van der Schaar
Detail(s)
| Field | Value |
| --- | --- |
| Original language | English |
| Title of host publication | 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) |
| Publisher | Institute of Electrical and Electronics Engineers, Inc. |
| Pages | 4528-4532 |
| ISBN (print) | 9781479928927, 9781479928934 |
| Publication status | Published - May 2014 |
| Externally published | Yes |
Publication series
| Field | Value |
| --- | --- |
| Name | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
| ISSN (Print) | 1520-6149 |
Conference
| Field | Value |
| --- | --- |
| Title | 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2014) |
| Place | Italy |
| City | Florence |
| Period | 4 - 9 May 2014 |
Abstract
A major challenge in the design and implementation of large-scale online services is determining which items to recommend to their users. For instance, Netflix makes movie recommendations, Amazon makes product recommendations, and Yahoo! makes webpage recommendations. In these systems, items are recommended based on the characteristics and circumstances of the users, which are provided to the recommender as contexts (e.g., search history, time, and location). Building an efficient recommender system is challenging because both the item space and the context space are very large. Existing works either focus on a large item space without contexts, on a large context space with a small number of items, or jointly consider the item and context spaces to solve the online recommendation problem. In contrast, we develop an algorithm that performs exploration and exploitation in the context space and the item space separately, combining clustering of the items with information aggregation in the context space. Specifically, given a user's context, our algorithm aggregates its past history over a ball centered on that context, whose radius decreases at a rate that keeps the payoff estimates accurate enough for the estimated payoffs of the recommended items to converge to the true (unknown) payoffs. Theoretical results show that our algorithm achieves a learning regret that is sublinear in time, where the regret is the payoff difference between the oracle optimal benchmark, in which the preferences of users for items in each context are known, and our algorithm, which operates with incomplete information. Numerical results show that our algorithm significantly outperforms existing algorithms in terms of regret (by over 48%).
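As a rough illustration of the aggregation idea described in the abstract, the sketch below implements a simple contextual bandit that, for each arriving context, averages past payoffs inside a ball around that context whose radius shrinks over time, and then selects an item cluster with a UCB-style index. The one-dimensional context, the radius schedule, the fixed cluster structure, and the confidence term are illustrative assumptions, not the paper's exact algorithm or its theoretical constants.

```python
# Minimal sketch of context-ball aggregation with clustered items.
# Assumptions (not from the paper): 1-D contexts in [0, 1], radius
# schedule r0 / t**gamma, and a standard UCB index over clusters.
import math
import random


class ContextBallBandit:
    def __init__(self, n_clusters, r0=0.5, gamma=0.25):
        self.n_clusters = n_clusters   # items are pre-grouped into clusters
        self.r0 = r0                   # initial ball radius (assumed)
        self.gamma = gamma             # radius decay exponent (assumed)
        self.history = []              # (context, cluster, payoff) tuples
        self.t = 0

    def _radius(self):
        # The ball shrinks over time so estimates stay accurate while
        # still aggregating enough nearby samples.
        return self.r0 / max(1, self.t) ** self.gamma

    def select(self, context):
        self.t += 1
        r = self._radius()
        counts = [0] * self.n_clusters
        sums = [0.0] * self.n_clusters
        # Aggregate past payoffs of each cluster over the ball around `context`.
        for ctx, c, payoff in self.history:
            if abs(ctx - context) <= r:
                counts[c] += 1
                sums[c] += payoff
        best, best_index = 0, -float("inf")
        for c in range(self.n_clusters):
            if counts[c] == 0:
                return c               # forced exploration of unseen clusters
            index = sums[c] / counts[c] + math.sqrt(2 * math.log(self.t) / counts[c])
            if index > best_index:
                best, best_index = c, index
        return best

    def update(self, context, cluster, payoff):
        self.history.append((context, cluster, payoff))


if __name__ == "__main__":
    # Toy environment: the expected payoff of each cluster depends on the context.
    random.seed(0)
    bandit = ContextBallBandit(n_clusters=3)

    def payoff(context, c):
        means = [context, 1 - context, 0.5]
        return 1.0 if random.random() < means[c] else 0.0

    total = 0.0
    for _ in range(2000):
        x = random.random()
        c = bandit.select(x)
        y = payoff(x, c)
        bandit.update(x, c, y)
        total += y
    print("average payoff:", total / 2000)
```

Shrinking the ball trades bias (mixing payoffs from dissimilar contexts) against variance (too few samples inside the ball), which is the balance the abstract's convergence argument relies on.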
Research Area(s)
- clustering algorithms, multi-armed bandit, online learning, recommender systems
Citation Format(s)
Clustering Based Online Learning in Recommender Systems: A Bandit Approach. / Song, Linqi; Tekin, Cem; van der Schaar, Mihaela.
2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Institute of Electrical and Electronics Engineers, Inc., 2014. pp. 4528-4532, article 6854459 (ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings).