Explainable Machine Learning Frameworks for Recommendation and Prediction Based on User-generated Content


Student thesis: Doctoral Thesis

Award date: 17 Aug 2021

Abstract

In the era of big data, artificial intelligence-enabled (AI-enabled) information technology (IT) artifacts that utilize big data of user-generated content (UGC) have been constructed for various industries and public sectors. However, there is a lack of research on enhancing the interpretability of AI-enabled IT artifacts to increase users' understanding of, and trust in, these black-box IT artifacts. Thus, explainable artificial intelligence (XAI) models that provide intuitive explanations of the decisions of black-box machine learning IT artifacts need to be developed. In this dissertation, following design science theory, we construct XAI frameworks for recommendation and prediction based on UGC to improve the interpretability of machine-learning-based information systems and further enhance the effectiveness of business and government operations.

Given the massive volume of UGC available in the era of big data, this dissertation aims to combine XAI with UGC to support the decision processes of users, companies, organizations, and governments. Generally, we address two research questions:

• How can IT artifacts based on XAI and UGC be developed for different contexts, such as social media, electronic government (e-government), and the transportation sector?

• How can multimedia and multi-task UGC be incorporated into XAI frameworks to strengthen their accuracy and explainability?

The first study adds interpretability to video tag recommender systems. Extant video tag recommender systems are uninterpretable, which leads to distrust of the recommendation outcomes, hesitation in IT adoption, and difficulty in system debugging. To address this deficiency, we construct an interpretable multimedia deep neural network (DNN) framework for video tag recommendation. The experimental results demonstrate that our interpretable video tag recommender system effectively explains its recommended tags to users and achieves strong recommendation performance. The interpretability of the proposed recommender system could help enhance IT adoption, customer engagement, value co-creation, and precision marketing on video-sharing platforms. To the best of our knowledge, the proposed model is not only the first explainable video tag recommender system but also the first explainable multimedia tag recommender system.

The second study designs an explainable multi-task tag recommender system for electronic petition (e-petition) platforms. The complexity of e-petition tags causes citizens to make errors when tagging their e-petitions, which may lead to the delay or failure of an e-petition. Tag recommender systems have been developed for social media and electronic commerce platforms; however, no tag recommender system has been designed for e-petition platforms. The purpose of this study is to develop an explainable multi-task deep learning tag recommender system that assists citizens in tagging their e-petitions. Our proposed framework is shown to be effective and interpretable through a series of quantitative and qualitative experiments. As the first model for e-petition tag recommendation, the proposed framework could facilitate citizen participation on e-petition platforms and promote the development of e-government.

The third study enhances traffic accident severity prediction through XAI and multi-task learning. Previous research focuses mostly on predicting injury and death severity, while property loss severity, as well as the different levels of death and property loss severity, has been ignored. Moreover, existing explanation methods for traffic accident severity prediction fall short in accuracy and effectiveness. To attain comprehensive, accurate, and interpretable risk profiling, we develop an explainable multi-task learning framework for predicting the injury, death, and property loss severity of traffic accidents. Experimental results show that the proposed model not only accurately predicts the three types of traffic accident severity but also effectively explains its prediction outcomes. This study benefits the public by facilitating the design of traffic safety policies and promoting smart city development. To the best of our knowledge, the proposed framework is the first explainable multi-task model and the first deep-learning-based model for traffic accident severity prediction.
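The multi-task setup described in this study, a shared representation feeding separate prediction heads for injury, death, and property loss severity, can be sketched as follows. This is a minimal NumPy illustration of the general architecture, not the dissertation's actual model; the layer sizes, feature count, and severity-level counts are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared encoder: accident features -> hidden representation.
W_shared = rng.normal(size=(8, 16))   # 8 hypothetical input features

# One output head per task (severity-level counts are invented).
heads = {
    "injury": rng.normal(size=(16, 3)),         # 3 injury severity levels
    "death": rng.normal(size=(16, 3)),          # 3 death severity levels
    "property_loss": rng.normal(size=(16, 4)),  # 4 property-loss brackets
}

def predict(x):
    # All tasks share one representation; each head maps it to its own
    # class distribution, which is what lets the tasks inform each other.
    h = relu(x @ W_shared)
    return {task: softmax(h @ W) for task, W in heads.items()}

x = rng.normal(size=(1, 8))   # one accident record
out = predict(x)
```

In training, the three heads' losses would be summed so gradients from every task shape the shared encoder; that weight sharing is the usual mechanism by which multi-task learning improves accuracy over three separate single-task models.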

The main contributions of my Ph.D. thesis are summarized as follows:
1. To fill the research gap in XAI-enabled data analytics, we develop interpretable and novel recommender systems and prediction models based on state-of-the-art machine learning techniques including DNNs, convolutional neural networks, and layer-wise relevance propagation.

2. To improve model performance and interpretability, we incorporate multimedia and multi-task UGC into XAI frameworks based on the forms and quantities of UGC data available in each specific context.
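As one concrete illustration of the explanation machinery named in contribution 1, layer-wise relevance propagation (LRP) redistributes a network's output score back onto its input features, layer by layer, so that each feature receives a share proportional to its contribution. Below is a minimal sketch of the basic epsilon-stabilized LRP rule for a single dense layer, with toy weights; it is not the thesis's implementation, only the textbook rule.

```python
import numpy as np

def lrp_dense(x, W, relevance_out, eps=1e-9):
    """Propagate relevance through a dense layer (epsilon-LRP rule).

    Each input receives its share of the contributions z_ij = x_i * W_ij
    to every output neuron, weighted by that neuron's relevance.
    """
    z = x @ W                                    # pre-activations, shape (out,)
    s = relevance_out / (z + eps * np.sign(z))   # stabilized per-output ratio
    c = W @ s                                    # redistribute back to inputs
    return x * c                                 # relevance per input feature

x = np.array([1.0, 2.0, 0.5])        # toy input features
W = np.array([[1.0, -0.5],
              [0.5,  1.0],
              [2.0,  0.0]])
R_out = x @ W                        # take the raw outputs as output relevance
R_in = lrp_dense(x, W, R_out)
```

A defining property of this rule is conservation: the input relevances sum (up to the epsilon stabilizer) to the output relevance, so the explanation accounts for the whole prediction rather than an arbitrary fraction of it.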

Research areas

• Explainable Artificial Intelligence, Machine Learning, Recommender System, Prediction Model, User-generated Content