The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Liu, Ziquan; Cui, Yufei; Yan, Yan et al.
Related Research Unit(s)
Detail(s)
Original language | English |
---|---|
Title of host publication | Proceedings of the 41st International Conference on Machine Learning |
Editors | Ruslan Salakhutdinov, Zico Kolter, Katherine Heller |
Publisher | ML Research Press |
Pages | 30908-30928 |
Publication status | Published - Jul 2024 |
Publication series
Name | Proceedings of Machine Learning Research |
---|---|
Volume | 235 |
ISSN (Print) | 2640-3498 |
Conference
Title | 41st International Conference on Machine Learning (ICML 2024) |
---|---|
Location | Messe Wien Exhibition Congress Center |
Country | Austria |
City | Vienna |
Period | 21 - 27 July 2024 |
Link(s)
Document Link | Links
---|---|
Link to Scopus | https://www.scopus.com/record/display.uri?eid=2-s2.0-85203832282&origin=recordpage |
Permanent Link | https://scholars.cityu.edu.hk/en/publications/publication(1c5c2c49-f5df-403f-96ca-c4715e1730ba).html |
Abstract
In safety-critical applications such as medical imaging and autonomous driving, where decisions have profound implications for patient health and road safety, it is imperative to maintain both high adversarial robustness to protect against potential adversarial attacks and reliable uncertainty quantification in decision-making. While extensive research has focused on enhancing adversarial robustness through various forms of adversarial training (AT), a notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models. To address this gap, this study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) under standard adversarial attacks from the adversarial defense community. We first show that existing CP methods do not produce informative prediction sets under the commonly used l∞-norm bounded attack if the model is not adversarially trained, underscoring the importance of adversarial training for CP. We next demonstrate that the prediction set size (PSS) of CP using adversarially trained models with AT variants is often larger than that obtained with standard AT, motivating our study of CP-efficient AT for improved PSS. We propose to optimize a Beta-weighting loss with an entropy-minimization regularizer during AT to improve CP efficiency, where our theoretical analysis shows that the Beta-weighting loss is an upper bound on PSS at the population level. Moreover, our empirical study on four image classification datasets across three popular AT baselines validates the effectiveness of the proposed Uncertainty-Reducing AT (AT-UR). Copyright 2024 by the author(s).
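To make the two ingredients in the abstract concrete, below is a minimal PyTorch sketch of (a) a split-conformal prediction set, whose average size is the PSS the paper seeks to reduce, and (b) a Beta-weighted cross-entropy loss with an entropy-minimization regularizer. The score function (one minus the true-class probability), the choice to evaluate the Beta(a, b) density at the true-class probability, and the names `conformal_qhat`, `prediction_sets`, `beta_weighted_loss`, and hyperparameters `a`, `b`, `lam` are illustrative assumptions for exposition, not the authors' released implementation.

```python
import math
import torch
import torch.nn.functional as F
from torch.distributions import Beta

# --- Split conformal prediction (standard score: s(x, y) = 1 - p_y) ---
def conformal_qhat(cal_probs, cal_labels, alpha=0.1):
    """Calibrate the score threshold on a held-out calibration set."""
    n = cal_labels.shape[0]
    scores = 1.0 - cal_probs[torch.arange(n), cal_labels]
    # Finite-sample-corrected empirical quantile (0-indexed order statistic).
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    return torch.sort(scores).values[k]

def prediction_sets(test_probs, qhat):
    """Boolean mask of included classes; row sums give the set sizes (PSS)."""
    return (1.0 - test_probs) <= qhat

# --- Hypothetical AT-UR style objective (a sketch, not the paper's code) ---
def beta_weighted_loss(logits, labels, a=1.1, b=1.1, lam=0.1):
    """Per-sample CE weighted by a Beta(a, b) density evaluated at the
    true-class probability, plus an entropy-minimization regularizer;
    a, b, and lam are illustrative hyperparameters."""
    probs = F.softmax(logits, dim=1)
    p_true = probs[torch.arange(labels.shape[0]), labels].clamp(1e-6, 1 - 1e-6)
    # Detach the weights so the Beta weighting rescales, but does not
    # redirect, the cross-entropy gradient.
    weights = Beta(a, b).log_prob(p_true).exp().detach()
    ce = F.cross_entropy(logits, labels, reduction="none")
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return (weights * ce).mean() + lam * entropy.mean()

# Toy usage on random "calibration" and "test" probabilities.
torch.manual_seed(0)
cal_probs = F.softmax(torch.randn(500, 10), dim=1)
cal_labels = torch.randint(0, 10, (500,))
qhat = conformal_qhat(cal_probs, cal_labels, alpha=0.1)
sets = prediction_sets(F.softmax(torch.randn(100, 10), dim=1), qhat)
print("mean prediction set size:", sets.float().sum(dim=1).mean().item())
```

In an AT pipeline, `beta_weighted_loss` would be applied to logits computed on adversarially perturbed inputs; the entropy term pushes toward lower-entropy predictive distributions, which in this sketch tends to shrink the conformal sets at a fixed coverage level.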
Bibliographic Note
Full text of this publication does not contain sufficient affiliation information. With consent from the author(s) concerned, the Research Unit(s) information for this record is based on the existing academic department affiliation of the author(s).
Citation Format(s)
The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks. / Liu, Ziquan; Cui, Yufei; Yan, Yan et al.
Proceedings of the 41st International Conference on Machine Learning. ed. / Ruslan Salakhutdinov; Zico Kolter; Katherine Heller. ML Research Press, 2024. p. 30908-30928 (Proceedings of Machine Learning Research; Vol. 235).