FedLaw: Value-Aware Federated Learning With Individual Fairness and Coalition Stability
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Author(s)
Lu, Jianfeng; Zhang, Hangjian; Zhou, Pan et al.
Detail(s)
Original language | English |
---|---|
Journal / Publication | IEEE Transactions on Emerging Topics in Computational Intelligence |
Online published | 18 Sept 2024 |
Publication status | Online published - 18 Sept 2024 |
Abstract
A long-standing problem in Federated Learning (FL) is that heterogeneous clients often have diverse gains from and requirements for the trained model, while their contributions are hard to evaluate due to privacy-preserving training. Existing works mainly rely on single-dimension metrics to calculate clients' contributions as aggregation weights, which may damage social fairness, discourage the cooperation willingness of worse-off clients, and cause revenue instability. To tackle this issue, we propose a novel incentive mechanism named FedLaw to effectively evaluate clients' contributions and assign aggregation weights accordingly. Specifically, we reuse the local model updates and model the contribution evaluation process as a convex coalition game among multiple players with a non-empty core. By deriving a closed-form expression of the Shapley value, we solve the game core in quadratic time. Moreover, we theoretically prove that FedLaw guarantees individual fairness, coalition stability, computational efficiency, collective rationality, redundancy, symmetry, additivity, strict desirability, and individual monotonicity, and we show that FedLaw achieves a constant convergence bound. Extensive experiments on four real-world datasets validate the superiority of FedLaw over five state-of-the-art baselines in terms of model aggregation, fairness, and time overhead. Experimental results show that FedLaw reduces the computation time of contribution evaluation by about 12 times and improves global model performance by about 2% while ensuring fairness. © 2024 IEEE.
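The abstract only sketches the idea of Shapley-value-based contribution evaluation; the closed-form, quadratic-time solution is given in the paper itself. Below is a minimal illustrative sketch of the general idea in Python, using a brute-force (exponential-time) Shapley computation and an assumed cosine-similarity coalition utility over local model updates. All function names, the utility choice, and the mock data are assumptions introduced for illustration, not FedLaw's actual method.

```python
# Illustrative sketch only: generic Shapley-value aggregation weights for FL.
# NOT FedLaw's closed-form, quadratic-time method; utility function,
# client updates, and all names below are assumptions.
from itertools import combinations
from math import factorial
import numpy as np

def coalition_utility(updates, coalition, reference):
    """Assumed utility: cosine similarity between the coalition's mean update
    and a reference direction (e.g., the previous global update)."""
    if not coalition:
        return 0.0
    agg = np.mean([updates[i] for i in coalition], axis=0)
    denom = np.linalg.norm(agg) * np.linalg.norm(reference)
    return float(agg @ reference / denom) if denom > 0 else 0.0

def shapley_weights(updates, reference):
    """Exact Shapley values by enumerating coalitions (exponential time),
    then normalised into non-negative aggregation weights."""
    n = len(updates)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                gain = (coalition_utility(updates, S + (i,), reference)
                        - coalition_utility(updates, S, reference))
                phi[i] += w * gain
    weights = np.clip(phi, 0.0, None)
    total = weights.sum()
    return weights / total if total > 0 else np.full(n, 1.0 / n)

# Usage: weight local updates before averaging them into the global update.
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(4)]   # mock local model updates
reference = np.ones(10)                             # mock reference direction
w = shapley_weights(updates, reference)
global_update = sum(wi * ui for wi, ui in zip(w, updates))
print("aggregation weights:", np.round(w, 3))
```

Note that the brute-force enumeration above is only for clarity; the paper's contribution is precisely that the Shapley values and the game core can be obtained in quadratic time via a closed-form expression.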
Research Area(s)
- coalition stability, contribution evaluation, Federated learning (FL), individual fairness, model aggregation
Citation Format(s)
FedLaw: Value-Aware Federated Learning With Individual Fairness and Coalition Stability. / Lu, Jianfeng; Zhang, Hangjian; Zhou, Pan et al.
In: IEEE Transactions on Emerging Topics in Computational Intelligence, 18.09.2024.