Taming Disparity in Recommendation Algorithms: Explainability and Mitigation
Description

Personalized recommender systems have become increasingly important in modern society: they reduce information and choice overload, satisfy the personalized demands of users, and increase the revenue of service and product providers. Despite being helpful and valuable, recommender systems are criticized for driving disparity among different groups or individuals of users or items, which can lead to severely negative consequences such as gender bias and ethnic bias.

Although recent studies have managed to analyze and mitigate disparity in recommendation models, several challenges remain: 1) the lack of an explainable scheme to analyze the cause or source of disparity with respect to input attributes/features; 2) the lack of effective approaches to explain disparity when the input attributes are correlated; and 3) the lack of general disparity-mitigation solutions that address the accuracy-fairness dilemma.

To tackle these challenges, we will conduct systematic investigations into explaining and mitigating disparities in recommendation algorithms. Our project consists of three major tasks. 1) We plan to measure fairness/disparity with explainability. We will build an explainable scheme that incorporates Shapley values to attribute disparity to input attributes. Because computing exact Shapley values is expensive, we will resort to two practical implementations to speed up the computation. 2) We plan to explain disparity when input attributes are not independent. Previous works assume that input attributes are independent, which is rarely realistic in the real world. To address this, we will build a causal model with Shapley values, in which feature dependencies are modeled by a causal graph. 3) We plan to mitigate disparity in recommendation algorithms. We will build a model-agnostic multi-objective framework that treats fairness as one of the objectives.
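The project does not specify which practical Shapley implementations will be used; as a minimal sketch, the snippet below illustrates one standard speed-up, Monte Carlo permutation sampling, applied to a toy disparity function. The attribute names, the `disparity` function, and its additive effects are illustrative assumptions, not part of the project.

```python
import random

# Hypothetical disparity metric: given a set of "active" input
# attributes, return a disparity score for the recommendation model.
# A toy additive score is used purely for illustration.
ATTRIBUTE_EFFECTS = {"age": 0.10, "gender": 0.30, "location": 0.05}

def disparity(active_attributes):
    """Toy disparity metric: sum of per-attribute effects (assumption)."""
    return sum(ATTRIBUTE_EFFECTS[a] for a in active_attributes)

def shapley_monte_carlo(attributes, value_fn, num_samples=2000, seed=0):
    """Approximate Shapley values by sampling random permutations.

    For each sampled permutation, an attribute's marginal contribution
    is the change in value_fn when it joins the attributes before it;
    averaging over permutations approximates the exact Shapley value.
    """
    rng = random.Random(seed)
    contributions = {a: 0.0 for a in attributes}
    for _ in range(num_samples):
        perm = list(attributes)
        rng.shuffle(perm)
        prefix = set()
        prev_value = value_fn(prefix)
        for attr in perm:
            prefix.add(attr)
            cur_value = value_fn(prefix)
            contributions[attr] += cur_value - prev_value
            prev_value = cur_value
    return {a: c / num_samples for a, c in contributions.items()}

phi = shapley_monte_carlo(list(ATTRIBUTE_EFFECTS), disparity)
# For an additive value function, each attribute's Shapley value equals
# its own effect, so phi recovers ATTRIBUTE_EFFECTS here.
```

Permutation sampling reduces the cost from exponential in the number of attributes to linear in the number of sampled permutations, at the price of a sampling error that shrinks as `num_samples` grows.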
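The multi-objective framework itself is not specified in the description; one common model-agnostic formulation is weighted scalarization, where a fairness penalty is combined with the accuracy loss and the weight controls the trade-off. The sketch below uses assumed one-dimensional toy losses (`accuracy_loss`, `fairness_loss`) purely to show the mechanics.

```python
def accuracy_loss(t):
    return (t - 1.0) ** 2   # toy loss, minimized at t = 1 (pure accuracy)

def fairness_loss(t):
    return t ** 2           # toy penalty, minimized at t = 0 (pure fairness)

def scalarized_minimum(lam, step=0.1, iters=500):
    """Gradient descent on (1 - lam) * accuracy_loss + lam * fairness_loss."""
    t = 0.5
    for _ in range(iters):
        g = (1 - lam) * 2 * (t - 1.0) + lam * 2 * t
        t -= step * g
    return t

# Sweeping the trade-off weight traces an accuracy-fairness frontier:
pareto = [round(scalarized_minimum(lam), 3) for lam in (0.0, 0.25, 0.5, 0.75, 1.0)]
# pareto ≈ [1.0, 0.75, 0.5, 0.25, 0.0]
```

Because the scalarized objective is optimized by ordinary gradient descent, this formulation plugs into any differentiable recommendation model without changing its architecture, which is what makes it model-agnostic.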
Moreover, we will investigate the projected gradient method to optimize the Shapley value within a gradient-based framework. With the support of industrial collaborators, our outcomes will establish a practical foundation for analyzing disparities in recommender systems, provide effective accuracy-fairness balancing techniques, enhance the transparency and explainability of recommender systems, and push forward the scientific frontier of this research area.
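Neither the objective nor the constraint set for the projected gradient method is specified in the description; as a minimal sketch under assumptions, the snippet below shows the general pattern — a gradient step followed by a Euclidean projection back onto the feasible set — on a toy quadratic with box constraints. The names `project_box` and `projected_gradient_descent` and the target vector are illustrative, not the project's actual formulation.

```python
def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n: clamp each coordinate."""
    return [min(max(v, lo), hi) for v in x]

def projected_gradient_descent(grad_fn, x0, project, step=0.1, iters=200):
    """Alternate a gradient step with a projection back onto the feasible set."""
    x = list(x0)
    for _ in range(iters):
        g = grad_fn(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)])
    return x

# Toy objective: f(x) = sum((x - t)^2) with an infeasible target t, so the
# constrained minimizer is the projection of t onto the box (assumption).
target = [1.5, -0.3, 0.4]
grad = lambda x: [2 * (xi - ti) for xi, ti in zip(x, target)]
x_star = projected_gradient_descent(grad, [0.5, 0.5, 0.5], project_box)
# x_star ≈ [1.0, 0.0, 0.4]
```

The same pattern extends to other constraint sets (e.g. the probability simplex) by swapping in the corresponding projection operator, which is what makes the method a natural fit for constrained, gradient-based training.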
Effective start/end date: 1/01/24 → …