TY - GEN
T1 - Neuron Activation Coverage
T2 - 12th International Conference on Learning Representations (ICLR 2024)
AU - Liu, Yibing
AU - Tian, Chris Xing
AU - Li, Haoliang
AU - Ma, Lei
AU - Wang, Shiqi
N1 - Research Unit(s) information for this publication is provided by the author(s) concerned.
PY - 2024/5
Y1 - 2024/5
AB - The out-of-distribution (OOD) problem generally arises when neural networks encounter data that deviates significantly from the training data distribution, i.e., the in-distribution (InD). In this paper, we study the OOD problem from a neuron activation view. We first formulate neuron activation states by considering both the neuron output and its influence on model decisions. Then, to characterize the relationship between neurons and OOD issues, we introduce the neuron activation coverage (NAC), a simple measure of neuron behaviors under InD data. Leveraging NAC, we show that 1) InD and OOD inputs can be largely separated based on neuron behavior, which significantly eases the OOD detection problem, with NAC outperforming 21 previous methods across three benchmarks (CIFAR-10, CIFAR-100, and ImageNet-1K); and 2) a positive correlation between NAC and model generalization ability holds consistently across architectures and datasets, which enables a NAC-based criterion for evaluating model robustness. Compared with prevalent InD validation criteria, NAC not only selects more robust models but also correlates more strongly with OOD test performance. Our code is available at: https://github.com/BierOne/ood_coverage.
UR - http://www.scopus.com/inward/record.url?scp=85192602137&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85192602137&origin=recordpage
M3 - RGC 32 - Refereed conference paper (with host publication)
T3 - 12th International Conference on Learning Representations, ICLR 2024
BT - The Twelfth International Conference on Learning Representations
PB - International Conference on Learning Representations, ICLR
Y2 - 7 May 2024 through 11 May 2024
ER -