Improved Fine-Tuning by Better Leveraging Pre-Training Data
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Title of host publication | 36th Conference on Neural Information Processing Systems (NeurIPS 2022) |
Editors | S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, A. Oh |
Publisher | Neural Information Processing Systems Foundation |
ISBN (print) | 9781713871088 |
Publication status | Published - Nov 2022 |
Publication series
Name | Advances in Neural Information Processing Systems |
---|---|
Volume | 35 |
ISSN (Print) | 1049-5258 |
Conference
Title | 36th Conference on Neural Information Processing Systems (NeurIPS 2022) |
---|---|
Location | Hybrid, New Orleans Convention Center |
Place | United States |
City | New Orleans |
Period | 28 November - 9 December 2022 |
Link(s)
Document Link | Links |
---|---|
Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85151083863&origin=recordpage |
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(28db7b45-a719-465a-ae02-4da985cd89f4).html |
Abstract
As a dominant paradigm, fine-tuning a pre-trained model on the target data is widely used in many deep learning applications, especially for small data sets. However, recent studies have empirically shown that, in some vision tasks, training from scratch can achieve final performance no worse than this pre-training strategy once the number of training samples is increased. In this work, we revisit this phenomenon from the perspective of generalization analysis using the excess risk bound, which is popular in learning theory. The result reveals that the excess risk bound may have a weak dependency on the pre-trained model. This observation inspires us to leverage pre-training data for fine-tuning, since this data is also available at fine-tuning time. The generalization result of using pre-training data shows that the excess risk bound on a target task can be improved when appropriate pre-training data is included in fine-tuning. Motivated by this theoretical result, we propose a novel selection strategy that selects a subset of the pre-training data to help improve generalization on the target task. Extensive experimental results for image classification tasks on 8 benchmark data sets verify the effectiveness of the proposed data-selection-based fine-tuning pipeline. Our code is available at https://github.com/ziquanliu/NeurIPS2022_UOT_fine_tuning. © 2022 Neural Information Processing Systems Foundation. All rights reserved.
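The abstract describes the pipeline only at a high level: score the pre-training samples, keep a subset relevant to the target task, and fine-tune on the target data together with that subset. The sketch below is a minimal, hypothetical PyTorch illustration of this idea using a simple cosine-similarity selection criterion and synthetic placeholder data; the function name `select_pretrain_subset`, the similarity criterion, and the label handling are illustrative assumptions, not the authors' method (their actual selection strategy is in the linked repository).

```python
# Hypothetical sketch of a data-selection-based fine-tuning pipeline.
# The selection criterion (cosine similarity to the mean target feature) and all
# names here are assumptions for illustration; see the authors' repository for
# the real implementation.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, ConcatDataset

torch.manual_seed(0)

# Placeholder "pre-trained" backbone, new task head, and synthetic data.
feat_dim, num_classes = 64, 10
backbone = nn.Sequential(nn.Linear(32, feat_dim), nn.ReLU())  # stands in for a pre-trained encoder
head = nn.Linear(feat_dim, num_classes)                       # task-specific head

target_x, target_y = torch.randn(200, 32), torch.randint(0, num_classes, (200,))
pre_x, pre_y = torch.randn(5000, 32), torch.randint(0, num_classes, (5000,))

# Step 1: score pre-training samples by how close their features are to the target data.
@torch.no_grad()
def select_pretrain_subset(backbone, target_x, pre_x, k):
    """Keep the top-k pre-training samples by cosine similarity between their
    features and the mean target feature (an assumed, simplified criterion)."""
    backbone.eval()
    t_feat = backbone(target_x).mean(dim=0, keepdim=True)  # target "prototype" feature
    p_feat = backbone(pre_x)
    scores = nn.functional.cosine_similarity(p_feat, t_feat, dim=1)
    return scores.topk(k).indices

idx = select_pretrain_subset(backbone, target_x, pre_x, k=500)
selected = TensorDataset(pre_x[idx], pre_y[idx])
target_ds = TensorDataset(target_x, target_y)

# Step 2: fine-tune on the target data plus the selected pre-training subset.
loader = DataLoader(ConcatDataset([target_ds, selected]), batch_size=64, shuffle=True)
model = nn.Sequential(backbone, head)
opt = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

In practice the pre-training and target label spaces usually differ, so the real pipeline would need a mapping or a separate loss for the auxiliary samples; the synthetic data above sidesteps that only to keep the sketch self-contained.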
Citation Format(s)
Improved Fine-Tuning by Better Leveraging Pre-Training Data. / Liu, Ziquan; Xu, Yi; Xu, Yuanhong et al.
36th Conference on Neural Information Processing Systems (NeurIPS 2022). ed. / S. Koyejo; S. Mohamed; A. Agarwal; D. Belgrave; K. Cho; A. Oh. Neural Information Processing Systems Foundation, 2022. (Advances in Neural Information Processing Systems; Vol. 35).
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review