Balancing Model Efficiency and Performance: Adaptive Pruner for Long-tailed Data

Zhe Zhao (Co-first Author), Haibin Wen (Co-first Author), Pengkun Wang*, Shuang Wang, Zhenkun Wang, Qingfu Zhang, Yang Wang*

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

Abstract

Long-tailed distribution datasets are prevalent in many machine learning tasks, yet existing neural network models still face significant challenges when handling such data. This paper proposes a novel adaptive pruning strategy, LTAP (Long-Tailed Adaptive Pruner), aimed at balancing model efficiency and performance to better address the challenges posed by long-tailed data distributions. LTAP introduces multi-dimensional importance scoring criteria and designs a dynamic weight adjustment mechanism to adaptively determine the pruning priority of parameters for different classes. By focusing on protecting parameters critical for tail classes, LTAP significantly enhances computational efficiency while maintaining model performance. This method combines the strengths of long-tailed learning and neural network pruning, overcoming the limitations of existing approaches in handling imbalanced data. Extensive experiments demonstrate that LTAP outperforms existing methods on various long-tailed datasets, achieving a good balance between model compression rate, computational efficiency, and classification accuracy. This research provides new insights into solving model optimization problems in long-tailed learning and is significant for improving the performance of neural networks on imbalanced datasets. The code is available at https://github.com/DataLab-atom/LT-VOTE. © 2025 by the author(s).
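The abstract describes two mechanisms: multi-dimensional importance scoring of parameters, and a dynamic weight adjustment that shifts pruning priority to protect parameters critical for tail classes. The paper's actual algorithm is not reproduced here; the following is a minimal illustrative sketch of that general idea, where all criterion names, the two-class accuracy figures, and the softmax-based weighting are assumptions for demonstration only.

```python
import numpy as np

# Illustrative sketch (not the paper's LTAP implementation): combine several
# per-parameter importance criteria with dynamically adjusted weights, then
# prune the lowest-scoring parameters.

rng = np.random.default_rng(0)
n_params = 8

# Two hypothetical importance criteria (e.g. weight magnitude and a
# gradient-based sensitivity). In practice these come from the model.
magnitude = rng.random(n_params)
grad_sensitivity = rng.random(n_params)

# Per-class validation accuracy; tail classes typically lag behind.
class_acc = {"head": 0.92, "tail": 0.55}

# Dynamic weights: softmax over per-class error, so the criterion associated
# with the weaker (tail) classes receives a larger vote.
gap = np.array([1.0 - class_acc["head"], 1.0 - class_acc["tail"]])
weights = np.exp(gap) / np.exp(gap).sum()

# Combined importance score: weighted vote over the criteria.
importance = weights[0] * magnitude + weights[1] * grad_sensitivity

# Prune the fraction of parameters with the lowest combined importance.
prune_ratio = 0.25
k = int(n_params * prune_ratio)
prune_idx = np.argsort(importance)[:k]
mask = np.ones(n_params, dtype=bool)
mask[prune_idx] = False

print("criterion weights:", np.round(weights, 3))
print("kept parameters:", int(mask.sum()), "of", n_params)
```

Here the weight on the tail-sensitive criterion grows as tail accuracy drops, so parameters that score highly under it are less likely to fall into the pruned fraction, loosely mirroring the tail-protection behaviour the abstract attributes to LTAP.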
Original language: English
Title of host publication: Proceedings of the 42nd International Conference on Machine Learning
Editors: Aarti Singh, Maryam Fazel, Daniel Hsu, Simon Lacoste-Julien, Felix Berkenkamp, Tegan Maharaj, Kiri Wagstaff, Jerry Zhu
Publisher: ML Research Press
Pages: 77723-77739
Number of pages: 17
Publication status: Published - Jul 2025
Event: 42nd International Conference on Machine Learning (ICML 2025) - Vancouver Convention Center, Vancouver, Canada
Duration: 13 Jul 2025 → 19 Jul 2025
https://icml.cc/Conferences/2025

Publication series

Name: Proceedings of Machine Learning Research
Volume: 267
ISSN (Print): 2640-3498

Conference

Conference: 42nd International Conference on Machine Learning (ICML 2025)
Abbreviated title: ICML 2025
Place: Canada
City: Vancouver
Period: 13/07/25 → 19/07/25
Internet address: https://icml.cc/Conferences/2025

Bibliographical note

Full text of this publication does not contain sufficient affiliation information. With consent from the author(s) concerned, the Research Unit(s) information for this record is based on the existing academic department affiliation of the author(s).

Funding

The authors gratefully acknowledge the support from the National Natural Science Foundation of China (NSFC) under Grant Nos. 62402472 and 12227901. This work was also supported by the Natural Science Foundation of Jiangsu Province (No. BK20240461), the Key Basic Research Foundation of Shenzhen (No. JCYJ20220818100005011), the Research Grants Council of the Hong Kong Special Administrative Region (GRF Project No. CityU 11215723), the Project of Stable Support for Youth Team in Basic Research Field, CAS (No. YSBR-005), and the Academic Leaders Cultivation Program at USTC.

RGC Funding Information

  • RGC-funded
