Abstract
Image scaling algorithms are intended to preserve visual features before and after scaling, and are widely used in numerous visual and image-processing applications. In this paper, we demonstrate an automated attack against common scaling algorithms, i.e., the automatic generation of camouflage images whose visual semantics change dramatically after scaling. To illustrate the threats posed by such camouflage attacks, we choose several computer vision applications as targeted victims, including multiple image classification applications based on popular deep learning frameworks, as well as mainstream web browsers. Our experimental results show that such attacks can cause different visual results after scaling and thus create evasion or data-poisoning effects against these victim applications. We also present an algorithm that successfully enables attacks against well-known cloud-based image services (such as those from Microsoft Azure, Aliyun, Baidu, and Tencent) and causes obvious misclassification, even when the details of image processing (such as the exact scaling algorithm and scale dimension parameters) are hidden in the cloud. To defend against such attacks, this paper suggests several potential countermeasures, ranging from attack prevention to detection.
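The core idea of the abstract can be illustrated with the simplest interpolation mode. Nearest-neighbour downscaling keeps only one source pixel per output cell, so an attacker who knows the sampling grid can overwrite just those pixels: the full-size image still looks like the original "cover", but the scaled result becomes an attacker-chosen "target". The following is a minimal sketch of that principle, not the paper's actual algorithm (which also handles interpolating scalers and unknown cloud parameters); the helper names and the toy scaler are the author's own assumptions.

```python
def nn_downscale(img, out_h, out_w):
    """Toy nearest-neighbour downscale: keeps one sampled pixel per output cell."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)] for y in range(out_h)]

def camouflage(cover, target):
    """Overwrite only the pixels the scaler will sample, so the full-size image
    still looks like `cover` but downscales exactly to `target`."""
    out_h, out_w = len(target), len(target[0])
    in_h, in_w = len(cover), len(cover[0])
    attack = [row[:] for row in cover]       # copy; most pixels stay untouched
    for y in range(out_h):
        for x in range(out_w):
            attack[y * in_h // out_h][x * in_w // out_w] = target[y][x]
    return attack

cover = [[255] * 8 for _ in range(8)]        # 8x8 all-white cover image
target = [[0, 128], [64, 32]]                # 2x2 image the scaler should reveal
attack = camouflage(cover, target)

changed = sum(attack[y][x] != 255 for y in range(8) for x in range(8))
print(changed)                               # only 4 of 64 pixels were modified
print(nn_downscale(attack, 2, 2) == target)  # True: scaling reveals the target
```

Real scalers (e.g. bilinear or bicubic) average several source pixels per output pixel, so the paper formulates the attack as an optimization problem rather than a direct pixel overwrite; this sketch only shows why the semantic gap between the two resolutions exists at all.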
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 28th USENIX Security Symposium |
| Pages | 443-460 |
| ISBN (Electronic) | 978-1-939133-06-9 |
| Publication status | Published - 2019 |
| Externally published | Yes |
| Event | 28th USENIX Security Symposium (USENIX Security ’19), Santa Clara, United States, 14 Aug 2019 → 16 Aug 2019, https://www.usenix.org/conference/usenixsecurity19 |
Conference
| Conference | 28th USENIX Security Symposium (USENIX Security ’19) |
|---|---|
| Place | United States |
| City | Santa Clara |
| Period | 14/08/19 → 16/08/19 |
| Internet address | https://www.usenix.org/conference/usenixsecurity19 |