Revisit the Power of Vanilla Knowledge Distillation: from Small Scale to Large Scale

Zhiwei Hao (Co-first Author), Jianyuan Guo (Co-first Author), Kai Han, Han Hu*, Chang Xu, Yunhe Wang*

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

10 Citations (Scopus)

Abstract

The tremendous success of large models trained on extensive datasets demonstrates that scale is a key ingredient in achieving superior results. It is therefore imperative to reconsider whether designing knowledge distillation (KD) approaches for limited-capacity architectures solely on the basis of small-scale datasets is sound. In this paper, we identify the small data pitfall present in previous KD methods, which results in underestimating the power of the vanilla KD framework on large-scale datasets such as ImageNet-1K. Specifically, we show that employing stronger data augmentation techniques and using larger datasets can directly reduce the gap between vanilla KD and other meticulously designed KD variants. This highlights the necessity of designing and evaluating KD approaches in the context of practical scenarios, casting off the limitations of small-scale datasets. Our investigation of vanilla KD and its variants in more complex schemes, including stronger training strategies and different model capacities, demonstrates that vanilla KD is elegantly simple but astonishingly effective in large-scale scenarios. Without bells and whistles, we obtain state-of-the-art ResNet-50, ViT-S, and ConvNeXtV2-T models for ImageNet, which achieve 83.1%, 84.3%, and 85.0% top-1 accuracy, respectively. PyTorch code and checkpoints can be found at https://github.com/Hao840/vanillaKD.
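For readers unfamiliar with the baseline being revisited, the sketch below illustrates the vanilla KD objective in PyTorch: hard-label cross-entropy blended with a temperature-scaled KL divergence to the teacher's soft predictions, following Hinton et al.'s classic formulation. The temperature and loss-weight values are illustrative assumptions, not the paper's tuned recipe; the linked repository contains the authors' actual code.

```python
import torch
import torch.nn.functional as F


def vanilla_kd_loss(student_logits: torch.Tensor,
                    teacher_logits: torch.Tensor,
                    labels: torch.Tensor,
                    temperature: float = 4.0,
                    alpha: float = 0.5) -> torch.Tensor:
    """Vanilla KD: cross-entropy on hard labels plus a temperature-scaled
    KL divergence to the teacher's soft labels. `temperature` and `alpha`
    are illustrative defaults (assumptions), not the paper's tuned values."""
    # Soften both output distributions with the distillation temperature.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale the KL term by T^2 so its gradient magnitude stays comparable
    # to the cross-entropy term (Hinton et al., 2015).
    kd_term = F.kl_div(log_p_student, p_teacher,
                       reduction="batchmean") * temperature ** 2
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term


# Toy usage: a batch of 8 samples over 1000 classes (ImageNet-1K-sized).
student_out = torch.randn(8, 1000)
teacher_out = torch.randn(8, 1000)
targets = torch.randint(0, 1000, (8,))
loss = vanilla_kd_loss(student_out, teacher_out, targets)
```

The paper's central claim is that this plain objective, when paired with stronger data augmentation and larger datasets, closes the gap to far more elaborate KD variants.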

Original language: English
Title of host publication: Advances in Neural Information Processing Systems 36
Subtitle of host publication: 37th Conference on Neural Information Processing Systems (NeurIPS 2023)
Editors: A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, S. Levine
Publisher: Neural Information Processing Systems (NeurIPS)
Number of pages: 14
ISBN (Electronic): 9781713899921
Publication status: Published - 2023
Externally published: Yes
Event: 37th Conference on Neural Information Processing Systems (NeurIPS 2023) - New Orleans Ernest N. Morial Convention Center, New Orleans, United States
Duration: 10 Dec 2023 – 16 Dec 2023
https://papers.nips.cc/paper_files/paper/2023
https://nips.cc/Conferences/2023

Publication series

Name: Advances in Neural Information Processing Systems
Volume: 36
ISSN (Print): 1049-5258

Conference

Conference: 37th Conference on Neural Information Processing Systems (NeurIPS 2023)
Abbreviated title: NIPS '23
Place: United States
City: New Orleans
Period: 10/12/23 – 16/12/23
Internet address: https://nips.cc/Conferences/2023

Bibliographical note

Publisher Copyright:
© 2023 Neural information processing systems foundation. All rights reserved.
