Clustering Based Online Learning in Recommender Systems: A Bandit Approach

Linqi Song, Cem Tekin, Mihaela van der Schaar

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

11 Citations (Scopus)

Abstract

A major challenge in designing and implementing large-scale online services is determining which items to recommend to users. For instance, Netflix recommends movies, Amazon recommends products, and Yahoo! recommends webpages. In these systems, items are recommended based on the characteristics and circumstances of the users, which are provided to the recommender as contexts (e.g., search history, time, and location). Building an efficient recommender system is challenging because both the item space and the context space are very large. Existing works either focus on a large item space without contexts, on a large context space with a small number of items, or jointly consider the item and context spaces to solve the online recommendation problem. In contrast, we develop an algorithm that performs exploration and exploitation in the context space and the item space separately, combining clustering of the items with information aggregation in the context space. Specifically, given a user's context, our algorithm aggregates its past history over a ball centered on the user's context, whose radius decreases at a rate that keeps the payoff estimates accurate enough for the recommended payoffs to converge to the true (unknown) payoffs. Theoretical results show that our algorithm achieves a learning regret that is sublinear in time, where regret is the payoff difference between the oracle optimal benchmark, in which the users' preferences for items in each context are known, and our algorithm, which operates with incomplete information. Numerical results show that our algorithm significantly outperforms existing algorithms in terms of regret (by over 48%).
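The context-ball aggregation idea in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact algorithm: the class name, the radius schedule t^(-1/(2+d)) (a common choice for contextual bandits), and the UCB-style exploration bonus are all assumptions made here for concreteness.

```python
import math


class ContextBallBandit:
    """Illustrative sketch: estimate each item's payoff from past
    observations that fall inside a shrinking ball around the current
    context, then recommend the item with the best optimistic estimate.
    Radius schedule and exploration bonus are assumed, not the paper's."""

    def __init__(self, n_items, context_dim):
        self.n_items = n_items
        self.d = context_dim
        self.history = []  # list of (context, item, reward) tuples
        self.t = 0         # number of recommendation rounds so far

    def _radius(self):
        # Shrinking ball radius; t^{-1/(2+d)} is a typical schedule
        # for contextual bandits (assumption).
        return self.t ** (-1.0 / (2 + self.d)) if self.t > 0 else 1.0

    def recommend(self, context):
        self.t += 1
        r = self._radius()
        # Aggregate past rewards observed inside the ball around this context.
        sums = [0.0] * self.n_items
        counts = [0] * self.n_items
        for ctx, item, reward in self.history:
            if math.dist(ctx, context) <= r:
                sums[item] += reward
                counts[item] += 1
        # UCB-style choice: items never seen in the ball are explored first.
        best, best_score = 0, float("-inf")
        for i in range(self.n_items):
            if counts[i] == 0:
                return i
            score = sums[i] / counts[i] + math.sqrt(
                2 * math.log(self.t) / counts[i]
            )
            if score > best_score:
                best, best_score = i, score
        return best

    def update(self, context, item, reward):
        self.history.append((tuple(context), item, reward))
```

In the paper, items are additionally clustered so that exploration happens over clusters rather than individual items; the sketch above omits that step and keeps only the context-ball aggregation.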
Original language: English
Title of host publication: 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
Publisher: IEEE
Pages: 4528-4532
ISBN (Print): 9781479928927, 9781479928934
DOIs
Publication status: Published - May 2014
Externally published: Yes
Event: 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2014) - Florence, Italy
Duration: 4 May 2014 - 9 May 2014

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
ISSN (Print): 1520-6149

Conference

Conference: 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2014)
Place: Italy
City: Florence
Period: 4/05/14 - 9/05/14

Research Keywords

  • clustering algorithms
  • multi-armed bandit
  • online learning
  • Recommender systems
