
Optimal Power Control for Over-the-Air Federated Learning with Gradient Compression

Mengzhe Ruan, Yunhe Li, Weizhou Zhang, Linqi Song, Weitao Xu*

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

Abstract

Federated Learning (FL) has emerged as a transformative approach in distributed machine learning, enabling the collaborative training of models using decentralized datasets from diverse sources such as mobile edge devices. This paradigm not only enhances data privacy but also significantly reduces the communication burden typically associated with centralized data aggregation. In wireless networks, Over-the-Air Federated Learning (OTA-FL) has been developed as a communication-efficient solution, allowing the simultaneous transmission and aggregation of model updates from numerous edge devices across the available bandwidth. Gradient compression techniques are necessary to further enhance the communication efficiency of FL, particularly in bandwidth-constrained wireless environments. Despite these advancements, OTA-FL with gradient compression encounters substantial challenges, including learning performance degradation due to compression errors, non-uniform channel fading, and noise interference. Existing power control strategies have yet to fully address these issues, leaving a significant gap in optimizing OTA-FL performance under gradient compression. This paper introduces a novel power control strategy that jointly accounts for gradient compression to optimize OTA-FL performance by minimizing the impact of channel fading and noise. Our approach employs linear approximations to complex terms, ensuring the stability and effectiveness of each gradient descent iteration. Numerical results demonstrate that our strategy significantly enhances convergence rates compared to traditional methods like channel inversion and uniform power transmission. This research advances the OTA-FL field and opens new avenues for performance tuning in communication-efficient federated learning systems. © 2024 IEEE.
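The abstract mentions gradient compression and compares against baselines such as channel inversion. As a rough illustration only (not the paper's proposed strategy), the sketch below simulates analog over-the-air aggregation with top-k sparsification and channel-inversion power control; all function names and the toy setup are assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k(g, k):
    """Top-k sparsification: keep only the k largest-magnitude entries."""
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out = np.zeros_like(g)
    out[idx] = g[idx]
    return out

def ota_aggregate(grads, h, p, noise_std, rng):
    """Analog over-the-air aggregation.

    Device i transmits sqrt(p[i]) * grads[i] over a fading channel with
    gain h[i]; the signals superpose in the air and the server observes
    the sum plus additive Gaussian receiver noise, then normalizes.
    """
    d = grads[0].size
    rx = sum(np.sqrt(pi) * hi * gi for gi, hi, pi in zip(grads, h, p))
    rx = rx + noise_std * rng.standard_normal(d)
    return rx / len(grads)

# Toy setup: 4 devices, 10-dim gradients, keep the top 5 entries each.
K, d, k = 4, 10, 5
grads = [top_k(rng.standard_normal(d), k) for _ in range(K)]
h = rng.uniform(0.5, 1.5, size=K)   # per-device channel gains
p_inv = 1.0 / h**2                  # channel inversion: sqrt(p_i) * h_i = 1
est = ota_aggregate(grads, h, p_inv, noise_std=0.0, rng=rng)
true_avg = sum(grads) / K           # with no noise, inversion recovers the mean
```

With nonzero `noise_std` or deep fades (small `h[i]`), channel inversion forces large transmit powers, which is one motivation the abstract gives for a more careful power control design.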
Original language: English
Title of host publication: Proceedings - 2024 IEEE 30th International Conference on Parallel and Distributed Systems, ICPADS 2024
Publisher: IEEE
Pages: 326-333
ISBN (Electronic): 979-8-3315-1596-6
ISBN (Print): 979-8-3315-1597-3
DOIs
Publication status: Published - 2024
Event: 30th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2024) - Hotel Moscow, Belgrade, Serbia
Duration: 10 Oct 2024 - 14 Oct 2024
https://attend.ieee.org/icpads/

Publication series

Name: Proceedings of the International Conference on Parallel and Distributed Systems - ICPADS
ISSN (Print): 1521-9097
ISSN (Electronic): 2690-5965

Conference

Conference: 30th IEEE International Conference on Parallel and Distributed Systems (ICPADS 2024)
Abbreviated title: ICPADS 2024
Place: Serbia
City: Belgrade
Period: 10/10/24 - 14/10/24
Internet address: https://attend.ieee.org/icpads/

Funding

The work was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CityU 21201420 and CityU 11201422), the Innovation and Technology Commission of Hong Kong (Project No. PRP/037/23FX and MHP/072/23), the NSF of Shandong Province (Project No. ZR2021LZH010), and the NSF of Guangdong Province (Project No. 2414050001974). This work was also supported in part by the Research Grants Council of the Hong Kong SAR under Grant GRF 11217823 and Collaborative Research Fund C1042-23GF, the National Natural Science Foundation of China under Grant 62371411, the InnoHK initiative of the Government of the HKSAR, and the Laboratory for AI-Powered Financial Technologies.

Research Keywords

  • Federated Learning
  • power control
  • Over-the-Air Computation
  • Gradient Compression

RGC Funding Information

  • RGC-funded
