Scaling Camouflage: Content Disguising Attack Against Computer Vision Applications

Research output: Journal Publications and Reviews; publication in refereed journal (peer-review)

2 Scopus Citations

Author(s)

  • Chao Shen
  • Qixue Xiao
  • Kang Li
  • Yu Chen

Detail(s)

Original language: English
Pages (from-to): 2017-2028
Journal / Publication: IEEE Transactions on Dependable and Secure Computing
Volume: 18
Issue number: 5
Online published: 4 Feb 2020
Publication status: Published - Sep 2021

Abstract

Recently, deep neural networks have achieved state-of-the-art performance in multiple computer vision tasks and have become core parts of computer vision applications. Most implementations embed a standard input preprocessing component, image scaling, to resize the original data to match the input size of the pre-trained neural network. This paper demonstrates content disguising attacks that exploit the image scaling procedure, causing the content a machine extracts to be dramatically dissimilar to the content before scaling. Unlike previous adversarial attacks, our attacks happen in the data preprocessing stage and are therefore not tied to specific machine learning models. To achieve a better deceiving and disguising effect, we propose and implement three feasible attack approaches using the L0-, L2-, and L∞-norm distance metrics. We have conducted a comprehensive evaluation on various image classification applications, including three local demos and two remote proprietary services. We also investigate the attack effects on a YOLO-v3 object detection demo. Our experimental results demonstrate successful content disguising against all of them, which validates that our approaches are practical.
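The core observation behind such attacks can be illustrated with a minimal sketch (this is not the paper's optimization-based method, and the helper `nearest_resize` is a hypothetical stand-in for a library resizer): nearest-neighbor downscaling samples only a sparse grid of source pixels, so an attacker who overwrites exactly those pixels fully controls the scaled output while touching only a small fraction of the full-size image.

```python
import numpy as np

def nearest_resize(img, out_h, out_w):
    """Toy nearest-neighbor downscaler: sample a regular grid of source pixels."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

# "Benign" 200x200 grayscale source the human sees (uniform gray here for brevity).
src = np.full((200, 200), 128, dtype=np.uint8)
# 50x50 target content the attacker wants the model to see after scaling.
tgt = np.zeros((50, 50), dtype=np.uint8)

# Attack: overwrite only the pixels the 200->50 nearest-neighbor scaler samples.
rows = np.arange(50) * 200 // 50
cols = np.arange(50) * 200 // 50
atk = src.copy()
atk[np.ix_(rows, cols)] = tgt

scaled = nearest_resize(atk, 50, 50)        # equals tgt exactly
changed = np.mean(atk != src)               # only 1/16 of the pixels modified
```

Real preprocessing pipelines typically use bilinear or bicubic interpolation, where each output pixel mixes several source pixels; the paper's L0-, L2-, and L∞-norm attack variants account for that by solving for perturbations under the actual scaling kernel rather than a pure sampling grid.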

Research Area(s)

  • adversarial examples, computer vision, Content disguising, deep learning, image scaling