Analysis of regularized federated learning

Langming Liu, Ding-Xuan Zhou*

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

5 Citations (Scopus)
19 Downloads (CityUHK Scholars)

Abstract

Federated learning is an efficient machine learning tool for handling heterogeneous big data while protecting privacy. Federated learning methods with regularization can control the amount of communication between the central server and the local machines. Stochastic gradient descent is often used to implement such methods on heterogeneous big data and to reduce communication costs. In this paper, we consider one such algorithm, Loopless Local Gradient Descent, which reduces the expected number of communications by controlling a probability level. We improve the method by allowing flexible step sizes and carry out a novel analysis of the convergence of the algorithm in a non-convex setting, in addition to the standard strongly convex setting. In the non-convex setting, we derive rates of convergence when the smooth objective function satisfies a Polyak-Łojasiewicz condition. When the objective function is strongly convex, a necessary and sufficient condition for convergence in expectation is presented. © 2024 The Authors
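As a rough illustration, here is a minimal NumPy sketch of one way the Loopless Local Gradient Descent step described in the abstract can be realized: with probability p, every local model is pulled toward the current average (a communication event), and otherwise each machine takes a gradient step on its own local loss. The step-size schedule eta_seq stands in for the flexible step sizes the paper allows; all names (l2gd, grads, eta_seq) and the exact scaling factors are illustrative assumptions, not taken from the paper itself.

    import numpy as np

    def l2gd(grads, x0, n_machines, lam, p, eta_seq, n_steps, rng=None):
        """Sketch of a Loopless Local Gradient Descent loop (illustrative only).

        grads   : list of gradient oracles, grads[i](x) for local loss f_i
        lam     : regularization strength coupling local models to their average
        p       : probability of a communication (aggregation) step
        eta_seq : callable t -> step size, modelling a flexible step size sequence
        """
        rng = rng or np.random.default_rng(0)
        X = np.tile(x0, (n_machines, 1))  # one model vector per local machine
        communications = 0
        for t in range(n_steps):
            eta = eta_seq(t)
            if rng.random() < p:
                # Communication step: pull each local model toward the average.
                x_bar = X.mean(axis=0)
                X -= (eta * lam / p) * (X - x_bar)
                communications += 1
            else:
                # Local step: each machine descends its own loss, no communication.
                for i in range(n_machines):
                    X[i] -= (eta / (1.0 - p)) * grads[i](X[i])
        return X, communications

    # Toy usage: quadratic local losses f_i(x) = 0.5 * ||x - c_i||^2 on 4 machines.
    centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
               np.array([-1.0, 0.0]), np.array([0.0, -1.0])]
    grads = [lambda x, c=c: x - c for c in centers]
    X, comms = l2gd(grads, np.zeros(2), 4, lam=0.5, p=0.1,
                    eta_seq=lambda t: 0.1 / (1.0 + 0.01 * t), n_steps=2000)

Lowering p reduces the expected number of communication events per iteration, which is the communication-saving mechanism the abstract refers to, and the decaying eta_seq is one example of the flexible step-size sequences the paper analyzes.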
Original language: English
Article number: 128579
Journal: Neurocomputing
Volume: 611
Online published: 24 Sept 2024
DOIs
Publication status: Published - 1 Jan 2025

Research Keywords

  • Convergence
  • Federated learning
  • Regularization
  • Step size sequence
  • Stochastic gradient descent

Publisher's Copyright Statement

  • This full text is made available under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/
