A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks
Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › 21_Publication in refereed journal › peer-review
Detail(s)
Original language | English |
---|---|
Article number | 172 |
Journal / Publication | ACM Transactions on Multimedia Computing, Communications and Applications |
Volume | 19 |
Issue number | 5s |
Online published | 7 Jun 2023 |
Publication status | Published - Oct 2023 |
Abstract
Deep neural networks (DNNs) are widely used for computer vision tasks. However, it has been shown that deep models are vulnerable to adversarial attacks: their performance drops when imperceptible perturbations are added to the original inputs, which may further degrade downstream visual tasks or introduce new problems such as data and privacy security risks. Hence, metrics for evaluating the robustness of deep models against adversarial attacks are desired. However, previous metrics were mainly proposed for evaluating the adversarial robustness of shallow networks on small-scale datasets. Although the Cross Lipschitz Extreme Value for nEtwork Robustness (CLEVER) metric has been proposed for large-scale datasets (e.g., the ImageNet dataset), it is computationally expensive and its performance relies on a tractable number of samples. In this article, we propose the Adversarial Converging Time Score (ACTS), an attack-dependent metric that quantifies the adversarial robustness of a DNN on a specific input. Our key observation is that local neighborhoods on a DNN's output surface have different shapes for different inputs; hence, different inputs require different amounts of time to converge to an adversarial sample. Based on this geometric interpretation, ACTS measures the converging time as an adversarial robustness metric. We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset using state-of-the-art deep networks. Extensive experiments show that ACTS is a more efficient and effective adversarial robustness metric than the previous CLEVER metric. © 2023 Copyright held by the owner/author(s).
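As an illustration of the converging-time idea described above, the sketch below runs a simple iterative gradient-sign attack and counts the iterations needed to flip a model's prediction; fewer iterations suggest lower local robustness for that input. This is not the paper's exact ACTS formulation (see the full text for that); the attack choice, the `converging_time` helper name, the perturbation budget and step size, the ResNet-50 model, and the placeholder input are all assumptions made for this example.

```python
# A minimal sketch of the converging-time intuition, NOT the authors' ACTS metric:
# count how many iterations of a gradient-sign attack it takes to reach an
# adversarial sample for a given input. All hyperparameters here are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights


def converging_time(model, x, y, eps=4 / 255, step=1 / 255, max_iters=100):
    """Return the number of attack iterations until `x` is misclassified
    (a proxy for converging time), or `max_iters` if the attack never succeeds."""
    model.eval()
    x_adv = x.clone().detach()
    for t in range(1, max_iters + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()        # gradient-sign ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid image range
            if model(x_adv).argmax(dim=1).item() != y.item():
                return t                              # converged to an adversarial sample
    return max_iters


# Usage with a placeholder input; a real evaluation would use a correctly
# classified ImageNet image and its ground-truth label.
model = resnet50(weights=ResNet50_Weights.DEFAULT)
x = torch.rand(1, 3, 224, 224)      # placeholder image in [0, 1]
y = model(x).argmax(dim=1)          # use the model's own prediction as the label
print("iterations to converge:", converging_time(model, x, y))
```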
Research Area(s)
- Adversarial robustness, deep neural network (DNN), image classification
Bibliographic Note
The full text of this publication does not contain sufficient affiliation information. With consent from the author(s) concerned, the Research Unit(s) information for this record is based on the existing academic department affiliation of the author(s).
Citation Format(s)
A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks. / WANG, Yang; DONG, Bo; XU, Ke et al.
In: ACM Transactions on Multimedia Computing, Communications and Applications, Vol. 19, No. 5s, 172, 10.2023.
Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › 21_Publication in refereed journal › peer-review