FedLaw: Value-Aware Federated Learning With Individual Fairness and Coalition Stability

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review


Author(s)

  • Jianfeng Lu
  • Hangjian Zhang
  • Pan Zhou
  • Xiong Wang
  • Chen Wang

Detail(s)

Original language: English
Journal / Publication: IEEE Transactions on Emerging Topics in Computational Intelligence
Online published: 18 Sept 2024
Publication status: Online published - 18 Sept 2024

Abstract

A long-standing problem in Federated Learning (FL) is that heterogeneous clients often derive diverse gains from, and impose diverse requirements on, the trained model, while their contributions are hard to evaluate because training is privacy-preserving. Existing works mainly rely on single-dimensional metrics to compute clients' contributions as aggregation weights, which may damage social fairness, discourage the cooperation of worse-off clients, and cause revenue instability. To tackle this issue, we propose a novel incentive mechanism named FedLaw that effectively evaluates clients' contributions and assigns aggregation weights accordingly. Specifically, we reuse the local model updates and model the contribution evaluation process as a convex coalition game among multiple players with a non-empty core. By deriving a closed-form expression of the Shapley value, we solve the game core in quadratic time. Moreover, we theoretically prove that FedLaw guarantees individual fairness, coalition stability, computational efficiency, collective rationality, redundancy, symmetry, additivity, strict desirability, and individual monotonicity, and we show that FedLaw achieves a constant convergence bound. Extensive experiments on four real-world datasets validate the superiority of FedLaw over five state-of-the-art baselines in terms of model aggregation, fairness, and time overhead. Experimental results show that FedLaw reduces the computation time of contribution evaluation by about 12 times and improves global model performance by about 2% while ensuring fairness. © 2024 IEEE.
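To make the aggregation idea concrete, the sketch below illustrates how Shapley-value contributions can be turned into aggregation weights for client model updates. It is not the paper's method: the coalition valuation (cosine alignment of a coalition's mean update with a reference direction) and the exponential-time exact-subset Shapley computation are both hypothetical stand-ins; FedLaw instead derives a closed-form Shapley expression that it solves in quadratic time.

```python
"""Illustrative sketch (assumptions, not FedLaw's closed form):
Shapley-value-based aggregation weights for federated model updates."""
from itertools import combinations
from math import factorial
import numpy as np


def coalition_value(updates, members, reference):
    """Hypothetical valuation: alignment of the coalition's mean update
    with a reference direction (e.g., the previous global update)."""
    if not members:
        return 0.0
    mean_update = np.mean([updates[i] for i in members], axis=0)
    denom = np.linalg.norm(mean_update) * np.linalg.norm(reference)
    return float(mean_update @ reference / denom) if denom > 0 else 0.0


def shapley_values(updates, reference):
    """Exact Shapley values via the subset formula (O(2^n) coalitions;
    for illustration only -- FedLaw's closed form avoids this cost)."""
    n = len(updates)
    players = list(range(n))
    phi = np.zeros(n)
    for i in players:
        others = [p for p in players if p != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                gain = (coalition_value(updates, subset + (i,), reference)
                        - coalition_value(updates, subset, reference))
                phi[i] += weight * gain
    return phi


def aggregate(updates, reference):
    """Clip negative contributions, normalize into aggregation weights,
    and form the weighted global update."""
    phi = np.clip(shapley_values(updates, reference), 0.0, None)
    if phi.sum() > 0:
        weights = phi / phi.sum()
    else:
        weights = np.full(len(updates), 1.0 / len(updates))
    global_update = np.sum([w * u for w, u in zip(weights, updates)], axis=0)
    return global_update, weights


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(size=10)                      # stand-in reference direction
    client_updates = [reference + rng.normal(scale=s, size=10)  # noisier clients
                      for s in (0.1, 0.5, 2.0)]
    _, w = aggregate(client_updates, reference)
    print("aggregation weights:", np.round(w, 3))
```

In this toy setting, clients whose updates align better with the reference direction receive larger weights; clipping negative Shapley values and renormalizing is one simple way to keep the weights non-negative and summing to one, which is an illustrative design choice rather than the scheme proved fair and stable in the paper.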

Research Area(s)

  • coalition stability, contribution evaluation, Federated learning (FL), individual fairness, model aggregation