Abstract
Federated learning (FL) is a promising framework for learning from distributed data while maintaining privacy. The development of efficient FL algorithms encounters various challenges, including heterogeneous data and systems, limited communication capacities, and constrained local computational resources. Recently developed FedADMM methods show great resilience to both data and system heterogeneity. However, they still suffer from performance deterioration if the hyperparameters are not carefully tuned. To address this issue, we propose an inexact and self-adaptive FedADMM algorithm, termed FedADMM-InSa. First, we design an inexactness criterion for the clients' local updates to eliminate the need for empirically setting the local training accuracy. This inexactness criterion can be assessed by each client independently based on its unique condition, thereby reducing the local computational cost and mitigating the undesirable straggler effect. The convergence of the resulting inexact ADMM is proved under the assumption of strongly convex loss functions. Additionally, we present a self-adaptive scheme that dynamically adjusts each client's penalty parameter, enhancing algorithm robustness by removing the need to choose penalty parameters empirically for each client. Extensive numerical experiments on both synthetic and real-world datasets have been conducted. In our tests, the FedADMM-InSa algorithm improves model accuracy by 7.8% while reducing clients' local workloads by 55.7% compared to benchmark algorithms.
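The abstract does not spell out the paper's self-adaptive penalty rule. As a point of reference, a classic residual-balancing update from the ADMM literature (Boyd et al., 2011) adjusts a penalty parameter by comparing primal and dual residuals; the sketch below illustrates that generic rule, not the paper's own scheme, and all names and default values are illustrative.

```python
def adapt_rho(rho, primal_res, dual_res, mu=10.0, tau=2.0):
    """Generic residual-balancing penalty update for ADMM.

    Illustrative only; not the FedADMM-InSa rule. Increases rho when
    the primal residual dominates (to enforce consensus harder) and
    decreases it when the dual residual dominates; otherwise keeps it.
    """
    if primal_res > mu * dual_res:
        return rho * tau
    if dual_res > mu * primal_res:
        return rho / tau
    return rho
```

In a federated setting, such a rule could in principle be evaluated per client with that client's own residuals, which matches the paper's motivation of avoiding a single hand-tuned penalty for all clients.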
© 2024 The Authors. Published by Elsevier Ltd.
| Original language | English |
| --- | --- |
| Article number | 106772 |
| Number of pages | 14 |
| Journal | Neural Networks |
| Volume | 181 |
| Online published | 1 Oct 2024 |
| DOIs | |
| Publication status | Published - Jan 2025 |
Funding
Authors' names are listed in alphabetical order by family name to signify equal contributions. The authors are grateful to anonymous referees for their valuable comments which have helped improve the paper substantially. This work has been funded by the Alexander von Humboldt-Professorship program, the Humboldt Research Fellowship for postdoctoral researchers, the European Union's Horizon Europe MSCA project ModConFlex (grant number 101073558), the COST Action MAT-DYN-NET, the Transregio 154 Project of the DFG, grants PID2020-112617GB-C22 and TED2021-131390B-I00 of MINECO (Spain), and the Madrid Government - UAM Agreement for the Excellence of the University Research Staff in the context of the V PRICIT (Regional Programme of Research and Technological Innovation).
Research Keywords
- ADMM
- Client heterogeneity
- Federated learning
- Inexactness criterion
Publisher's Copyright Statement
- This full text is made available under CC-BY-NC 4.0. https://creativecommons.org/licenses/by-nc/4.0/