Accelerating Communication-efficient Federated Multi-Task Learning With Personalization and Fairness

Renyou Xie, Chaojie Li*, Xiaojun Zhou, Zhaoyang Dong

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

6 Citations (Scopus)

Abstract

Federated learning provides a promising framework for collaboratively training a machine learning model without sharing users' data, offering a security solution that preserves privacy during model training on IoT devices. Nonetheless, data heterogeneity and communication resource constraints make it difficult to develop a federated learning algorithm with a fast convergence rate, which can significantly degrade the quality of service for critical machine learning tasks, e.g., facial recognition, that require edge-ready, low-power, low-latency training. To address these challenges, this paper proposes a communication-efficient federated learning approach in which a momentum technique is leveraged to accelerate convergence while substantially reducing communication requirements. Firstly, a federated multi-task learning framework that reformulates the learning tasks as a multi-objective optimization problem is introduced to address data heterogeneity. The multiple gradient descent algorithm is harnessed to find a common descent direction for all participants, so that common features are learned without sacrificing any client's performance. Secondly, to reduce communication costs, a local momentum technique with global information is developed to speed up convergence, and the convergence of the proposed method is analyzed in the non-convex case. It is proved that the proposed local momentum achieves the same acceleration as global momentum, while being more robust than algorithms that rely solely on global momentum for acceleration. Thirdly, the generality of the proposed acceleration approach is investigated and demonstrated through an accelerated variant of FedAvg. Finally, the accuracy, convergence rate, and robustness to data heterogeneity of the proposed method are evaluated through empirical experiments on four public datasets, and a real-world IoT platform is constructed to demonstrate its communication efficiency. © 2024 IEEE.
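The multiple-gradient-descent step mentioned in the abstract has a well-known closed form for two objectives: the minimum-norm point in the convex hull of the two gradients is a non-increasing direction for both tasks. The sketch below illustrates only that generic idea, not the paper's actual algorithm; the function name and the two-task restriction are assumptions.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Minimum-norm convex combination of two task gradients (MGDA, two-task case).

    Solves min_{a in [0,1]} || a*g1 + (1-a)*g2 ||^2 in closed form; the
    resulting vector is a common descent direction for both objectives.
    """
    diff = g1 - g2
    denom = float(np.dot(diff, diff))
    if denom == 0.0:
        return g1  # gradients coincide; either one works
    alpha = float(np.clip(np.dot(g2 - g1, g2) / denom, 0.0, 1.0))
    return alpha * g1 + (1.0 - alpha) * g2
```

With more than two participants, the same min-norm problem over the probability simplex is typically solved iteratively (e.g., by a Frank-Wolfe scheme), but the two-task case already conveys the intuition behind learning common features without sacrificing any client.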
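The local-momentum-with-global-information mechanism can be sketched in the same spirit: each client runs momentum SGD locally, but its buffer is seeded from a server-aggregated momentum every communication round, so local acceleration tracks the global direction. Everything below (function names, uniform averaging, the grad_fn interface) is a hypothetical illustration of that mechanism as described in the abstract, not the paper's algorithm.

```python
import numpy as np

def client_update(w, m_global, batches, grad_fn, lr=0.01, beta=0.9):
    """Local SGD with momentum, seeded from the aggregated global momentum."""
    w, m = w.copy(), m_global.copy()
    for batch in batches:
        g = grad_fn(w, batch)
        m = beta * m + g   # heavy-ball style momentum buffer
        w = w - lr * m
    return w, m

def server_round(w, m_global, clients):
    """One communication round: broadcast (w, m_global), average the returns."""
    outs = [client_update(w, m_global, c["batches"], c["grad_fn"]) for c in clients]
    w_new = np.mean([o[0] for o in outs], axis=0)  # FedAvg-style model average
    m_new = np.mean([o[1] for o in outs], axis=0)  # momentum buffers averaged too
    return w_new, m_new
```

Because the momentum buffer travels with the model only once per round, the extra communication is a single buffer-sized payload per client, which is how momentum acceleration can be obtained without per-step synchronization.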
Original language: English
Pages (from-to): 2239-2253
Number of pages: 15
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 35
Issue number: 11
Online published: 10 Jun 2024
DOIs
Publication status: Published - Nov 2024
Externally published: Yes

Research Keywords

  • Communication efficiency
  • Convergence
  • Costs
  • Data heterogeneity
  • Data models
  • Federated learning
  • Internet of Things
  • Local momentum technique
  • Multi-task learning
  • Multitasking
  • Task analysis
  • Training
