Abstract
Neural network pruning is an essential technique for reducing the size and complexity of deep neural networks, enabling the deployment of large-scale models on devices with limited resources. However, existing pruning approaches rely heavily on training data to guide their pruning strategies, making them ineffective for federated learning over distributed and confidential datasets. Moreover, the memory- and computation-intensive pruning process becomes infeasible for resource-constrained devices in federated learning. To address these challenges, we propose FedTiny, a distributed pruning framework for federated learning that produces specialized tiny models for memory- and computing-constrained devices. FedTiny introduces two key modules that adaptively search for coarse- and finer-pruned specialized models to fit deployment scenarios with sparse and cheap local computation. First, an adaptive batch normalization selection module mitigates the biases in pruning caused by the heterogeneity of local data. Second, a lightweight progressive pruning module refines the models under strict memory and computation budgets, allowing the pruning policy for each layer to be determined gradually rather than by evaluating the overall model structure at once. Experimental results demonstrate the effectiveness of FedTiny, which outperforms state-of-the-art approaches, particularly when compressing deep models into extremely sparse tiny models. FedTiny achieves an accuracy improvement of 2.61% while reducing the computational cost by 95.91% and the memory footprint by 94.01% compared to state-of-the-art methods. © 2023 IEEE.
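To make the progressive pruning idea concrete, below is a minimal sketch, assuming PyTorch. The function `progressive_prune`, its `budget` and `step` parameters, and the mean-magnitude saliency proxy are illustrative assumptions, not FedTiny's published algorithm or API.

```python
# Hypothetical sketch of progressive, layer-by-layer magnitude pruning under a
# parameter budget. Illustrative only; FedTiny's actual module is not shown here.
import torch
import torch.nn as nn


def progressive_prune(model: nn.Module, budget: int, step: float = 0.1) -> None:
    """Gradually zero out weights, one layer at a time, until the number of
    surviving parameters fits within `budget`."""
    layers = [m for m in model.modules() if isinstance(m, (nn.Linear, nn.Conv2d))]
    masks = {id(m): torch.ones_like(m.weight) for m in layers}

    def live_params() -> int:
        return int(sum(mask.sum().item() for mask in masks.values()))

    def mean_magnitude(layer: nn.Module) -> float:
        # Cheap per-layer saliency proxy: average magnitude of surviving weights.
        mask = masks[id(layer)]
        return ((layer.weight.abs() * mask).sum() / mask.sum()).item()

    while live_params() > budget:
        candidates = [m for m in layers if masks[id(m)].sum() > 0]
        if not candidates:
            break
        # The per-layer policy emerges step by step: repeatedly prune a small
        # fraction of the currently least-salient layer's surviving weights,
        # rather than fixing all layer sparsities up front.
        target = min(candidates, key=mean_magnitude)
        mask = masks[id(target)].view(-1)
        scores = target.weight.abs().detach().view(-1).clone()
        scores[mask == 0] = float("inf")  # skip already-pruned weights
        k = max(1, int(step * int(mask.sum().item())))
        idx = torch.topk(scores, k, largest=False).indices
        mask[idx] = 0.0
        with torch.no_grad():
            target.weight.view(-1)[idx] = 0.0
```

In the full framework, these per-layer decisions would also draw on the adaptive batch normalization selection module described above to counter data heterogeneity; that coupling is omitted from this sketch.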
| Original language | English |
|---|---|
| Title of host publication | Proceedings - 2023 IEEE 43rd International Conference on Distributed Computing Systems, ICDCS 2023 |
| Publisher | IEEE |
| Pages | 190-201 |
| ISBN (Electronic) | 979-8-3503-3986-4 |
| ISBN (Print) | 979-8-3503-3987-1 |
| DOIs | |
| Publication status | Published - 2023 |
| Event | 43rd IEEE International Conference on Distributed Computing Systems (ICDCS 2023) |
| Location | Sheraton Hong Kong & Towers, Hong Kong, China |
| Duration | 18 Jul 2023 → 21 Jul 2023 |
| Internet addresses | https://icdcs2023.icdcs.org/ • https://ieeexplore.ieee.org/xpl/conhome/1000213/all-proceedings |
Publication series
| Name | Proceedings - International Conference on Distributed Computing Systems |
|---|---|
| ISSN (Print) | 1063-6927 |
| ISSN (Electronic) | 2575-8411 |
Conference
| Conference | 43rd IEEE International Conference on Distributed Computing Systems (ICDCS 2023) |
|---|---|
| Place | Hong Kong, China |
| Period | 18/07/23 → 21/07/23 |
| Internet address | https://icdcs2023.icdcs.org/ |
Bibliographical note
Full text of this publication does not contain sufficient affiliation information. With consent from the author(s) concerned, the Research Unit(s) information for this record is based on the existing academic department affiliation of the author(s).
Research Keywords
- federated learning
- neural network pruning
- tiny neural networks
RGC Funding Information
- RGC-funded