Towards Efficient Training and Evaluation of Robust Models against l0 Bounded Adversarial Perturbations

Xuyang Zhong, Yixiao Huang, Chen Liu*

*Corresponding author for this work

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review

Abstract

This work studies sparse adversarial perturbations bounded by the $l_0$ norm. We propose a white-box PGD-like attack method named sparse-PGD to effectively and efficiently generate such perturbations. Furthermore, we combine sparse-PGD with a black-box attack to comprehensively and more reliably evaluate the models' robustness against $l_0$ bounded adversarial perturbations. Moreover, the efficiency of sparse-PGD enables us to conduct adversarial training to build models that are robust against sparse perturbations. Extensive experiments demonstrate that our proposed attack algorithm exhibits strong performance in different scenarios. More importantly, compared with other robust models, our adversarially trained model demonstrates state-of-the-art robustness against various sparse attacks. Code is available at https://github.com/CityU-MLO/sPGD.
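As an illustration of the general idea behind $l_0$-constrained PGD-style attacks described in the abstract, the sketch below shows a single gradient-ascent step followed by projection onto an $l_0$ ball (keeping only the $k$ largest-magnitude perturbation entries). This is a generic, hedged sketch for intuition only; the function name `l0_pgd_step` and the raw-gradient update are assumptions of this example, not the paper's actual sparse-PGD algorithm, which uses its own update and projection rules.

```python
import numpy as np

def l0_pgd_step(x, x_orig, grad, step_size, k):
    """One illustrative PGD-style step under an l0 constraint.

    NOTE: a generic sketch, not the paper's sparse-PGD. It takes a raw
    gradient-ascent step, then projects the perturbation onto the l0
    ball of radius k by zeroing all but the k largest-magnitude entries.
    """
    x_new = x + step_size * grad            # gradient ascent on the loss
    delta = (x_new - x_orig).ravel()        # current perturbation
    if np.count_nonzero(delta) > k:
        # keep only the k largest-magnitude components of the perturbation
        drop = np.argsort(np.abs(delta))[:-k]
        delta[drop] = 0.0
    return x_orig + delta.reshape(x.shape)
```

In a full attack, such a step would be iterated with the gradient of the model's loss at the current adversarial example, typically with a clip back to the valid input range after each projection.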

©  2024 by the author(s).
Original language: English
Title of host publication: Proceedings of the 41st International Conference on Machine Learning
Publisher: ML Research Press
Pages: 61708-61726
Publication status: Published - 2024
Event: 41st International Conference on Machine Learning (ICML 2024) - Messe Wien Exhibition Congress Center, Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024
https://proceedings.mlr.press/v235/
https://icml.cc/

Publication series

Name: Proceedings of Machine Learning Research
Volume: 235
ISSN (Print): 2640-3498

Conference

Conference: 41st International Conference on Machine Learning (ICML 2024)
Country/Territory: Austria
City: Vienna
Period: 21/07/24 - 27/07/24

Funding

This work is supported by National Natural Science Foundation of China (NSFC Project No. 62306250), CityU APRC Project (Project No. 9610614), and CityU Seed Grant (Project No. 9229130).
