Distributed Pruning Towards Tiny Neural Networks in Federated Learning

Hong Huang, Lan Zhang, Chaoyue Sun, Ruogu Fang, Xiaoyong Yuan, Dapeng Wu

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

18 Citations (Scopus)

Abstract

Neural network pruning is an essential technique for reducing the size and complexity of deep neural networks, enabling large-scale models on devices with limited resources. However, existing pruning approaches heavily rely on training data to guide the pruning strategies, making them ineffective for federated learning over distributed and confidential datasets. Additionally, the memory- and computation-intensive pruning process becomes infeasible for resource-constrained devices in federated learning. To address these challenges, we propose FedTiny, a distributed pruning framework for federated learning that generates specialized tiny models for memory- and computing-constrained devices. We introduce two key modules in FedTiny to adaptively search for coarse- and finer-pruned specialized models that fit deployment scenarios with sparse and cheap local computation. First, an adaptive batch normalization selection module is designed to mitigate biases in pruning caused by the heterogeneity of local data. Second, a lightweight progressive pruning module further prunes the models under strict memory and computational budgets, allowing the pruning policy for each layer to be determined gradually rather than by evaluating the overall model structure. The experimental results demonstrate the effectiveness of FedTiny, which outperforms state-of-the-art approaches, particularly when compressing deep models to extremely sparse tiny models. FedTiny achieves an accuracy improvement of 2.61% while significantly reducing the computational cost by 95.91% and the memory footprint by 94.01% compared to state-of-the-art methods. © 2023 IEEE.
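To make the progressive pruning idea concrete, here is a minimal sketch of layer-wise magnitude pruning in which sparsity is increased in small increments and each layer's mask is decided independently, so no global evaluation of the whole model structure is needed. This is an illustrative assumption for exposition only: the function name, the magnitude-based criterion, and the fixed step schedule are not taken from the paper and do not reproduce FedTiny's actual modules.

```python
import numpy as np

def progressive_layer_prune(layers, target_sparsity, steps=4):
    """Progressively zero the smallest-magnitude weights, raising the
    sparsity one increment at a time and deciding each layer's mask on
    its own, so the full model is never evaluated at once.
    (Illustrative magnitude pruning; NOT FedTiny's exact criterion.)"""
    pruned = [w.copy() for w in layers]
    for step in range(1, steps + 1):
        sparsity = target_sparsity * step / steps  # gradual schedule
        for w in pruned:
            k = int(sparsity * w.size)  # weights to zero in this layer
            if k == 0:
                continue
            # Threshold at the k-th smallest absolute value.
            thresh = np.sort(np.abs(w), axis=None)[k - 1]
            w[np.abs(w) <= thresh] = 0.0
    return pruned

# Example: two toy "layers" pruned to ~90% sparsity.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(8, 8)) for _ in range(2)]
tiny = progressive_layer_prune(layers, target_sparsity=0.9)
for w in tiny:
    print(f"layer sparsity = {np.mean(w == 0):.2f}")
```

Because weights zeroed at earlier steps stay below every later threshold, the schedule only ever grows each layer's mask, which is what keeps the per-step memory and compute cost low on constrained devices.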
Original language: English
Title of host publication: Proceedings - 2023 IEEE 43rd International Conference on Distributed Computing Systems, ICDCS 2023
Publisher: IEEE
Pages: 190-201
ISBN (Electronic): 979-8-3503-3986-4
ISBN (Print): 979-8-3503-3987-1
DOIs
Publication status: Published - 2023
Event: 43rd IEEE International Conference on Distributed Computing Systems (ICDCS 2023) - Sheraton Hong Kong & Towers, Hong Kong, China
Duration: 18 Jul 2023 - 21 Jul 2023
https://icdcs2023.icdcs.org/
https://ieeexplore.ieee.org/xpl/conhome/1000213/all-proceedings

Publication series

Name: Proceedings - International Conference on Distributed Computing Systems
ISSN (Print): 1063-6927
ISSN (Electronic): 2575-8411

Conference

Conference: 43rd IEEE International Conference on Distributed Computing Systems (ICDCS 2023)
Place: Hong Kong, China
Period: 18/07/23 - 21/07/23
Internet address

Bibliographical note

Full text of this publication does not contain sufficient affiliation information. With consent from the author(s) concerned, the Research Unit(s) information for this record is based on the existing academic department affiliation of the author(s).

Research Keywords

  • federated learning
  • neural network pruning
  • tiny neural networks

RGC Funding Information

  • RGC-funded
