Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses
Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review
Detail(s)
| Original language | English |
| --- | --- |
| Pages (from-to) | 13054-13067 |
| Journal / Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Volume | 45 |
| Issue number | 11 |
| Online published | 19 Jun 2023 |
| Publication status | Published - 1 Nov 2023 |
Abstract
Adversarial training (AT) is considered to be one of the most reliable defenses against adversarial attacks. However, models trained with AT sacrifice standard accuracy and do not generalize well to unseen attacks. Some recent works improve generalization by training with adversarial samples under alternative threat models, such as the on-manifold threat model or the neural perceptual threat model. However, the former requires exact manifold information while the latter requires algorithm relaxation. Motivated by these considerations, we propose a novel threat model called the Joint Space Threat Model (JSTM), which exploits the underlying manifold information with a Normalizing Flow, ensuring that the exact manifold assumption holds. Under JSTM, we develop novel adversarial attacks and defenses. Specifically, we propose the Robust Mixup strategy, in which we maximize the adversity of the interpolated images, gaining robustness while preventing overfitting. Our experiments show that Interpolated Joint Space Adversarial Training (IJSAT) achieves good performance in standard accuracy, robustness, and generalization. IJSAT is also flexible: it can serve as a data augmentation method to improve standard accuracy, and it can be combined with many existing AT approaches to improve robustness. We demonstrate the effectiveness of our approach on three benchmark datasets, CIFAR-10/100, OM-ImageNet and CIFAR-10-C. © 2023 IEEE.
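The core of the Robust Mixup strategy described in the abstract is to interpolate two training samples and pick the interpolation that is most adversarial, i.e., the one that maximizes the model's loss. The following is a minimal, hypothetical sketch of that idea in image space; the function name, candidate-coefficient grid, and toy loss are illustrative assumptions, not the paper's implementation (which operates in the joint space defined via a Normalizing Flow).

```python
import numpy as np

def robust_mixup(x1, y1, x2, y2, loss_fn, lams=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Sketch of Robust Mixup: among candidate mixing coefficients,
    choose the one whose interpolated sample maximizes the loss
    ('adversity'), then return that mixed sample and label."""
    best_lam, best_loss = None, -np.inf
    for lam in lams:
        # Standard mixup interpolation of inputs and (soft) labels.
        x_mix = lam * x1 + (1.0 - lam) * x2
        y_mix = lam * y1 + (1.0 - lam) * y2
        loss = loss_fn(x_mix, y_mix)
        if loss > best_loss:
            best_lam, best_loss = lam, loss
    # Re-materialize the most adversarial interpolation.
    x_mix = best_lam * x1 + (1.0 - best_lam) * x2
    y_mix = best_lam * y1 + (1.0 - best_lam) * y2
    return x_mix, y_mix, best_lam
```

In this sketch `loss_fn` stands in for the training loss of the current model; a real AT loop would backpropagate through the chosen interpolated sample as in ordinary adversarial training.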
Research Area(s)
- Adversarial Defense, Adversarial Robustness, Computational modeling, Data models, Generative Models, Image Classification, Manifolds, Robustness, Standards, Threat modeling, Training
Citation Format(s)
In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 45, No. 11, 01.11.2023, p. 13054-13067.