Provably Neural Active Learning Succeeds via Prioritizing Perplexing Samples

Research output: Chapters, Conference Papers, Creative and Literary Works > RGC 32 - Refereed conference paper (with host publication) > peer-review

1 Scopus citation

Author(s)

Dake Bu, Wei Huang, Taiji Suzuki et al.

Related Research Unit(s)

Detail(s)

Original language: English
Title of host publication: Proceedings of 41st International Conference on Machine Learning
Editors: Ruslan Salakhutdinov, Zico Kolter, Katherine Heller, Adrian Weller, Nuria Oliver, Jonathan Scarlett, Felix Berkenkamp
Pages: 4642-4695
Number of pages: 54
Publication status: Published - Jul 2024

Publication series

Name: Proceedings of Machine Learning Research
Volume: 235
ISSN (Print): 2640-3498

Conference

Title: 41st International Conference on Machine Learning (ICML 2024)
Location: Messe Wien Exhibition Congress Center
Place: Austria
City: Vienna
Period: 21 - 27 July 2024

Abstract

Neural network-based active learning (NAL) is a cost-effective data selection technique that uses neural networks to select and train on a small subset of samples. While existing work has developed various effective or theoretically justified NAL algorithms, the understanding of the two commonly used query criteria of NAL, uncertainty-based and diversity-based, remains in its infancy. In this work, we take a step forward by offering a unified explanation for the success of both criteria from a feature-learning view. Specifically, we consider a feature-noise data model comprising easy-to-learn or hard-to-learn features disrupted by noise, and analyze two-layer NN-based NAL in the pool-based scenario. We provably show that uncertainty-based and diversity-based NAL are inherently governed by one and the same principle: striving to prioritize samples that contain yet-to-be-learned features. We further prove that this shared principle is the key to their success, namely achieving a small test error within a small labeled set. In contrast, strategy-free passive learning exhibits a large test error due to inadequate learning of yet-to-be-learned features, and thus requires a significantly larger label complexity to reduce the test error sufficiently. Experimental results validate our findings. © 2024 by the author(s).
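To make the pool-based setting and the uncertainty-based query criterion described above concrete, the following is a minimal illustrative sketch, not the paper's algorithm or analysis: a two-layer network is repeatedly retrained on the labeled set and queries the pool samples it is most "perplexed" by (highest predictive entropy). The synthetic data, the entropy score, the use of scikit-learn's MLPClassifier, and all hyperparameters are assumptions made purely for illustration.

```python
# Illustrative sketch (not the paper's method): pool-based active learning
# with a two-layer neural network and an uncertainty-based (entropy) query rule.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical synthetic pool standing in for a feature-noise data model.
n_pool = 1000
X = rng.normal(size=(n_pool, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

labeled = list(rng.choice(n_pool, size=10, replace=False))   # small seed set
unlabeled = [i for i in range(n_pool) if i not in labeled]

def entropy(p):
    """Predictive entropy; higher means the model is more uncertain."""
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

for round_ in range(10):
    # Two-layer (one hidden layer) network trained on the current labeled set.
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    net.fit(X[labeled], y[labeled])

    # Uncertainty-based query: pick the pool points the model is least sure about.
    probs = net.predict_proba(X[unlabeled])
    scores = entropy(probs)
    picked = [unlabeled[i] for i in np.argsort(-scores)[:10]]

    labeled.extend(picked)
    unlabeled = [i for i in unlabeled if i not in picked]
    print(f"round {round_}: labeled={len(labeled)}, "
          f"pool accuracy={net.score(X, y):.3f}")
```

A diversity-based criterion would replace the entropy ranking with a selection rule that spreads queries across the pool (e.g., picking points far from already-labeled ones in some representation); the paper's point is that, in its analysis, both rules end up prioritizing samples whose features the network has not yet learned.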

Citation Format(s)

Provably Neural Active Learning Succeeds via Prioritizing Perplexing Samples. / Bu, Dake; Huang, Wei; Suzuki, Taiji et al.
Proceedings of 41st International Conference on Machine Learning. ed. / Ruslan Salakhutdinov; Zico Kolter; Katherine Heller; Adrian Weller; Nuria Oliver; Jonathan Scarlett; Felix Berkenkamp. 2024. p. 4642-4695 (Proceedings of Machine Learning Research; Vol. 235).
