FedADMM-InSa: An inexact and self-adaptive ADMM for federated learning

Yongcun Song, Ziqi Wang*, Enrique Zuazua

*Corresponding author for this work

Research output: Journal Publications and Reviews · RGC 21 - Publication in refereed journal · peer-review

1 Citation (Scopus)
42 Downloads (CityUHK Scholars)

Abstract

Federated learning (FL) is a promising framework for learning from distributed data while maintaining privacy. The development of efficient FL algorithms encounters various challenges, including heterogeneous data and systems, limited communication capacities, and constrained local computational resources. Recently developed FedADMM methods show great resilience to both data and system heterogeneity. However, they still suffer from performance deterioration if the hyperparameters are not carefully tuned. To address this issue, we propose an inexact and self-adaptive FedADMM algorithm, termed FedADMM-InSa. First, we design an inexactness criterion for the clients' local updates that eliminates the need to empirically set the local training accuracy. Each client can assess this criterion independently based on its own conditions, thereby reducing its local computational cost and mitigating the undesirable straggler effect. The convergence of the resulting inexact ADMM is proved under the assumption of strongly convex loss functions. Additionally, we present a self-adaptive scheme that dynamically adjusts each client's penalty parameter, enhancing the algorithm's robustness by removing the need to empirically choose a penalty parameter for each client. Extensive numerical experiments on both synthetic and real-world datasets have been conducted. In these tests, our FedADMM-InSa algorithm improves model accuracy by 7.8% while reducing clients' local workloads by 55.7% compared to benchmark algorithms.
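To make the two ideas in the abstract concrete, the following is a minimal, hypothetical sketch of consensus ADMM with (i) an inexact local solve that each client can stop early using a criterion it evaluates itself, and (ii) per-client penalty parameters adapted on the fly. This is NOT the authors' FedADMM-InSa implementation: the stopping rule is a stand-in gradient-based criterion, and the penalty adaptation shown is classic residual balancing, used here only as a proxy for the paper's self-adaptive scheme. The function name and all toy quadratic losses are assumptions for illustration.

```python
import numpy as np

def fedadmm_insa_sketch(A_list, b_list, rho0=1.0, rounds=60, tol=1e-6):
    """Toy consensus ADMM with inexact local solves and per-client
    adaptive penalties (hypothetical sketch, not the paper's algorithm).

    Each client k holds a strongly convex quadratic loss
        f_k(x) = 0.5 * ||A_k x - b_k||^2
    as a stand-in for a local training objective.
    """
    d = A_list[0].shape[1]
    K = len(A_list)
    z = np.zeros(d)                         # global (server) model
    xs = [np.zeros(d) for _ in range(K)]    # local models
    lams = [np.zeros(d) for _ in range(K)]  # dual variables
    rhos = [rho0] * K                       # per-client penalty parameters

    for _ in range(rounds):
        # --- local updates: inexact minimization of the augmented Lagrangian
        for k in range(K):
            x = xs[k].copy()
            # smoothness constant of the local augmented objective
            L = np.linalg.norm(A_list[k], 2) ** 2 + rhos[k]
            for _ in range(100):
                grad = (A_list[k].T @ (A_list[k] @ x - b_list[k])
                        + lams[k] + rhos[k] * (x - z))
                # Client-checkable stopping rule: stop early once the
                # gradient is small relative to the local-global gap
                # (a stand-in for the paper's inexactness criterion).
                if np.linalg.norm(grad) <= tol + 0.1 * rhos[k] * np.linalg.norm(x - z):
                    break
                x -= grad / L
            xs[k] = x

        # --- server update: penalty-weighted aggregation
        z_old = z
        z = sum(rhos[k] * xs[k] + lams[k] for k in range(K)) / sum(rhos)

        # --- dual updates
        for k in range(K):
            lams[k] = lams[k] + rhos[k] * (xs[k] - z)

        # --- self-adaptive penalties: classic residual balancing,
        #     a hedged proxy for the paper's adaptive scheme
        for k in range(K):
            r = np.linalg.norm(xs[k] - z)            # primal residual
            s = rhos[k] * np.linalg.norm(z - z_old)  # dual residual
            if r > 10 * s:
                rhos[k] *= 2.0
            elif s > 10 * r:
                rhos[k] /= 2.0
    return z
```

Because the local losses are quadratic, the consensus minimizer solves the normal equations of the pooled problem, which makes the sketch easy to sanity-check against a closed-form solution.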

© 2024 The Authors. Published by Elsevier Ltd.

Original language: English
Article number: 106772
Number of pages: 14
Journal: Neural Networks
Volume: 181
Online published: 1 Oct 2024
DOIs
Publication status: Published - Jan 2025

Funding

Authors' names are listed in alphabetical order by family name to signify equal contributions. The authors are grateful to anonymous referees for their valuable comments, which have helped improve the paper substantially. This work has been funded by the Alexander von Humboldt-Professorship program, the Humboldt Research Fellowship for postdoctoral researchers, the European Union's Horizon Europe MSCA project ModConFlex (grant number 101073558), the COST Action MAT-DYN-NET, the Transregio 154 Project of the DFG, grants PID2020-112617GB-C22 and TED2021-131390B-I00 of MINECO (Spain), and the Madrid Government - UAM Agreement for the Excellence of the University Research Staff in the context of the V PRICIT (Regional Programme of Research and Technological Innovation).

Research Keywords

  • ADMM
  • Client heterogeneity
  • Federated learning
  • Inexactness criterion

Publisher's Copyright Statement

  • This full text is made available under CC-BY-NC 4.0. https://creativecommons.org/licenses/by-nc/4.0/

