TY - GEN
T1 - Hierarchical Item Inconsistency Signal Learning for Sequence Denoising in Sequential Recommendation
AU - Zhang, Chi
AU - Du, Yantong
AU - Zhao, Xiangyu
AU - Han, Qilong
AU - Chen, Rui
AU - Li, Li
PY - 2022
Y1 - 2022
N2 - Sequential recommender systems aim to recommend the next items in which target users are most interested based on their historical interaction sequences. In practice, historical sequences typically contain some inherent noise (e.g., accidental interactions), which is harmful for learning accurate sequence representations and thus misleads next-item recommendation. However, the absence of supervised signals (i.e., labels indicating noisy items) makes the problem of sequence denoising rather challenging. To this end, we propose a novel sequence denoising paradigm for sequential recommendation by learning hierarchical item inconsistency signals. More specifically, we design a hierarchical sequence denoising (HSD) model, which first learns two levels of inconsistency signals in input sequences, and then generates noiseless subsequences (i.e., dropping inherent noisy items) for subsequent sequential recommenders. It is noteworthy that HSD is flexible enough to accommodate supervised item signals, if any, and can be seamlessly integrated with most existing sequential recommendation models to boost their performance. Extensive experiments on five public benchmark datasets demonstrate the superiority of HSD over state-of-the-art denoising methods and its applicability to a wide variety of mainstream sequential recommendation models. The implementation code is available at https://github.com/zc-97/HSD.
KW - contrastive learning
KW - curriculum learning
KW - sequence denoising
KW - sequential recommendation
UR - http://www.scopus.com/inward/record.url?scp=85140847029&partnerID=8YFLogxK
U2 - 10.1145/3511808.3557348
DO - 10.1145/3511808.3557348
M3 - RGC 32 - Refereed conference paper (with host publication)
SN - 9781450392365
T3 - International Conference on Information and Knowledge Management, Proceedings
SP - 2508
EP - 2518
BT - CIKM '22 - Proceedings of the 31st ACM International Conference on Information and Knowledge Management
PB - Association for Computing Machinery
CY - New York
T2 - 31st ACM International Conference on Information and Knowledge Management (CIKM 2022)
Y2 - 17 October 2022 through 21 October 2022
ER -