TY - JOUR
T1 - Recommender Systems in the Era of Large Language Models (LLMs)
AU - Zhao, Zihuai
AU - Fan, Wenqi
AU - Li, Jiatong
AU - Liu, Yunqing
AU - Mei, Xiaowei
AU - Wang, Yiqi
AU - Wen, Zhen
AU - Wang, Fei
AU - Zhao, Xiangyu
AU - Tang, Jiliang
AU - Li, Qing
PY - 2024/11
Y1 - 2024/11
N2 - With the prosperity of e-commerce and web applications, Recommender Systems (RecSys) have become an indispensable component of our daily lives, providing personalized suggestions that cater to user preferences. While Deep Neural Networks (DNNs) have achieved significant advancements in enhancing recommender systems by modeling user-item interactions and incorporating their textual side information, these DNN-based methods still exhibit limitations, such as difficulty in effectively understanding users' interests and capturing textual side information, and an inability to generalize to various seen/unseen recommendation scenarios and reason about their predictions. Meanwhile, the emergence of Large Language Models (LLMs), such as ChatGPT and GPT-4, has revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI), owing to their remarkable abilities in the fundamental tasks of language understanding and generation, as well as their impressive generalization and reasoning capabilities. As a result, recent studies have actively attempted to harness the power of LLMs to enhance recommender systems. Given the rapid evolution of this research direction, there is a pressing need for a systematic overview that summarizes existing LLM-empowered recommender systems, so as to provide researchers and practitioners in relevant fields with an in-depth understanding. Therefore, in this survey, we conduct a comprehensive review of LLM-empowered recommender systems from various aspects, including the pre-training, fine-tuning, and prompting paradigms. More specifically, we first introduce representative methods that harness the power of LLMs (as feature encoders) for learning representations of users and items. Then, we systematically review the emerging advanced techniques of LLMs for enhancing recommender systems from three paradigms, namely pre-training, fine-tuning, and prompting. Finally, we comprehensively discuss promising future directions in this emerging field. © 2024 IEEE.
KW - Electronic mail
KW - History
KW - In-context Learning
KW - Large Language Models (LLMs)
KW - Motion pictures
KW - Pre-training and Fine-tuning
KW - Prompting
KW - Recommender Systems
KW - Reviews
KW - Surveys
KW - Task analysis
UR - http://www.scopus.com/inward/record.url?scp=85191306545&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85191306545&origin=recordpage
U2 - 10.1109/TKDE.2024.3392335
DO - 10.1109/TKDE.2024.3392335
M3 - RGC 21 - Publication in refereed journal
SN - 1041-4347
VL - 36
SP - 6889
EP - 6907
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
IS - 11
ER -