The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

Author(s)

  • Ziquan Liu
  • Yufei Cui
  • Yan Yan
  • Yi Xu
  • Xiangyang Ji
  • Xue Liu

Detail(s)

Original language: English
Title of host publication: Proceedings of the 41st International Conference on Machine Learning
Editors: Ruslan Salakhutdinov, Zico Kolter, Katherine Heller
Publisher: ML Research Press
Pages: 30908-30928
Publication status: Published - Jul 2024

Publication series

Name: Proceedings of Machine Learning Research
Volume: 235
ISSN (Print): 2640-3498

Conference

Title: 41st International Conference on Machine Learning (ICML 2024)
Location: Messe Wien Exhibition Congress Center
Place: Austria
City: Vienna
Period: 21 - 27 July 2024

Abstract

In safety-critical applications such as medical imaging and autonomous driving, where decisions have profound implications for patient health and road safety, it is imperative to maintain both high adversarial robustness against potential adversarial attacks and reliable uncertainty quantification in decision-making. While extensive research has focused on enhancing adversarial robustness through various forms of adversarial training (AT), a notable knowledge gap remains concerning the uncertainty inherent in adversarially trained models. To address this gap, this study investigates the uncertainty of deep learning models by examining the performance of conformal prediction (CP) under the standard adversarial attacks used in the adversarial-defense community. We first show that existing CP methods do not produce informative prediction sets under the commonly used ℓ∞-norm bounded attack if the model is not adversarially trained, which underscores the importance of adversarial training for CP. We then demonstrate that the prediction set size (PSS) of CP with adversarially trained models using AT variants is often larger than with standard AT, motivating us to investigate CP-efficient AT for improved PSS. We propose optimizing a Beta-weighting loss with an entropy-minimization regularizer during AT to improve CP efficiency; our theoretical analysis shows that the Beta-weighting loss is an upper bound on PSS at the population level. Moreover, an empirical study on four image classification datasets across three popular AT baselines validates the effectiveness of the proposed Uncertainty-Reducing AT (AT-UR). Copyright 2024 by the author(s).
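The abstract evaluates CP through its prediction set size (PSS). As background, below is a minimal sketch of split conformal prediction with the standard 1 − p(true class) nonconformity score; the score choice, the coverage level alpha, and the random data in the usage snippet are illustrative assumptions, not the authors' exact experimental setup.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with the 1 - p(true class) score.

    cal_probs:  (n, K) softmax outputs on a held-out calibration set
    cal_labels: (n,)   integer labels for the calibration set
    test_probs: (m, K) softmax outputs on test points
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level, clipped to 1 for small n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # Include every class whose nonconformity score is within the threshold.
    sets = (1.0 - test_probs) <= q
    return sets, sets.sum(axis=1)  # boolean membership and PSS per test point

# Illustrative usage with random softmax vectors (assumed data).
rng = np.random.default_rng(0)
cal_p = rng.dirichlet(np.ones(10), size=500)
cal_y = rng.integers(0, 10, size=500)
test_p = rng.dirichlet(np.ones(10), size=5)
sets, pss = conformal_sets(cal_p, cal_y, test_p)
print("prediction set sizes:", pss)
```

A model attacked at test time tends to assign low probability to the true class, inflating the scores' quantile and hence the PSS, which is why uninformative sets arise without adversarial training.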
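The proposed AT-UR combines a Beta-weighting loss with an entropy-minimization regularizer. One plausible reading, sketched below in PyTorch, weights each example's cross-entropy by a Beta(a, b) density evaluated at the model's true-class probability and adds a predictive-entropy penalty; the weighting scheme, the Beta parameters a and b, and the regularizer strength lam are assumptions for illustration and may differ from the paper's exact formulation. During AT, this loss would be applied to adversarially perturbed inputs.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def beta_weighted_entropy_loss(logits, targets, a=1.1, b=3.0, lam=0.1):
    # NOTE: a, b, and lam are hypothetical values, not the paper's.
    probs = F.softmax(logits, dim=1)
    p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    # Per-example weight: Beta(a, b) density at the true-class probability,
    # detached so the weights themselves are not differentiated through.
    w = Beta(a, b).log_prob(p_true.clamp(1e-6, 1 - 1e-6)).exp().detach()
    ce = F.cross_entropy(logits, targets, reduction="none")
    # Entropy-minimization regularizer: sharper predictive distributions
    # tend to yield smaller conformal prediction sets.
    ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return (w * ce).mean() + lam * ent.mean()
```

In an AT loop, the logits here would come from adversarial examples generated by an inner-maximization attack such as PGD, with this objective replacing the plain cross-entropy in the outer minimization.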

Bibliographic Note

Full text of this publication does not contain sufficient affiliation information. With consent from the author(s) concerned, the Research Unit(s) information for this record is based on the existing academic department affiliation of the author(s).

Citation Format(s)

The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks. / Liu, Ziquan; Cui, Yufei; Yan, Yan et al.
Proceedings of the 41st International Conference on Machine Learning. ed. / Ruslan Salakhutdinov; Zico Kolter; Katherine Heller. ML Research Press, 2024. p. 30908-30928 (Proceedings of Machine Learning Research; Vol. 235).
