Abstract
Many recent studies demonstrate that state-of-the-art deep neural networks (DNNs) can be easily fooled by adversarial examples, which are generated by adding carefully crafted, visually imperceptible distortions to legitimate inputs. Adversarial examples can cause a DNN to misclassify an input as any target label. Various methods have been proposed in the literature to minimize different ℓp norms of the distortion, but a versatile framework covering all types of adversarial attacks has been lacking. To better understand the security properties of DNNs, we propose a general framework for constructing adversarial examples that leverages the Alternating Direction Method of Multipliers (ADMM) to split the optimization problem, enabling effective minimization of various ℓp norms of the distortion, including the ℓ0, ℓ1, ℓ2, and ℓ∞ norms. The proposed framework thus unifies the crafting of ℓ0, ℓ1, ℓ2, and ℓ∞ attacks. Experimental results demonstrate that the proposed ADMM attacks achieve both a high attack success rate and smaller distortion than state-of-the-art attack methods. © 2019 Association for Computing Machinery.
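The ADMM splitting idea described in the abstract can be sketched on a toy problem. The example below is a hypothetical illustration (not the paper's implementation, which targets deep networks): an ℓ2 attack on a toy linear classifier, where the distortion is split into two copies `d` and `z` coupled by a dual variable `u`, alternating a gradient step on the attack loss, a closed-form proximal step on the ℓ2 norm, and a dual update.

```python
import numpy as np

# Hypothetical sketch of an ADMM-based l2 adversarial attack on a toy
# linear classifier. We split  min_d  ||d||_2^2 + loss(x + d)  as
#     min_{d,z}  loss(x + d) + ||z||_2^2    s.t.  d = z,
# and alternate updates on d, z, and the (scaled) dual variable u.

w = np.array([1.0, -2.0])      # toy model: score(x) = w @ x, class = sign(score)
x = np.array([2.0, 0.5])       # legitimate input, classified positive (score = 1)
kappa = 0.5                    # confidence margin in the hinge-style attack loss
rho, lr = 1.0, 0.02            # ADMM penalty parameter and inner step size

d = np.zeros(2)
z = np.zeros(2)
u = np.zeros(2)
for _ in range(50):
    # d-update: gradient descent on max(0, score + kappa) plus the
    # augmented-Lagrangian coupling term (rho/2)||d - z + u||^2.
    for _ in range(30):
        score = w @ (x + d)
        g = w if score > -kappa else np.zeros(2)   # subgradient of the hinge
        d = d - lr * (g + rho * (d - z + u))
    # z-update: closed-form prox of ||z||_2^2 -> shrinkage of d + u.
    z = rho * (d + u) / (2.0 + rho)
    # dual update on the constraint residual d - z.
    u = u + d - z

x_adv = x + d
print(w @ x, w @ x_adv)   # the adversarial score flips sign
```

The ℓ2 norm admits a closed-form proximal step here; for the ℓ0, ℓ1, and ℓ∞ norms the z-update would be replaced by the corresponding proximal operator (e.g. soft-thresholding for ℓ1), which is what makes the splitting a unifying template.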
| Original language | English |
|---|---|
| Title of host publication | ASP-DAC 2019 - 24th Asia and South Pacific Design Automation Conference |
| Publisher | IEEE |
| Pages | 538-543 |
| ISBN (Print) | 9781450360074 |
| DOIs | |
| Publication status | Published - 21 Jan 2019 |
| Externally published | Yes |
| Event | 24th Asia and South Pacific Design Automation Conference, ASPDAC 2019 - Tokyo, Japan. Duration: 21 Jan 2019 → 24 Jan 2019 |
Publication series
| Name | Proceedings of the Asia and South Pacific Design Automation Conference, ASP-DAC |
|---|---|
Conference
| Conference | 24th Asia and South Pacific Design Automation Conference, ASPDAC 2019 |
|---|---|
| Place | Japan |
| City | Tokyo |
| Period | 21/01/19 → 24/01/19 |
Funding
This work is partly supported by the National Science Foundation (CCF-1733701, CNS-1704662, and CNS-1739748), Air Force Research Laboratory FA8750-18-20058, and the U.S. Office of Naval Research.