TY - JOUR
T1 - DeepMTD
T2 - Moving Target Defense for Deep Visual Sensing against Adversarial Examples
AU - Song, Qun
AU - Yan, Zhenyu
AU - Tan, Rui
PY - 2022/2
Y1 - 2022/2
N2 - Deep learning-based visual sensing has achieved attractive accuracy but has been shown to be vulnerable to adversarial attacks. Specifically, once the attackers obtain the deep model, they can construct adversarial examples to mislead the model into yielding wrong classification results. Deployable adversarial examples, such as small stickers pasted on road signs and lanes, have been shown to be effective in misleading advanced driver-assistance systems. Most existing countermeasures against adversarial examples build their security on the attackers' ignorance of the defense mechanisms. Thus, they fall short of following Kerckhoffs's principle and can be subverted once the attackers know the details of the defense. This article applies the strategy of moving target defense (MTD) to generate multiple new deep models after system deployment that collaboratively detect and thwart adversarial examples. Our MTD design is based on the adversarial examples' minor transferability across different models. The dynamic generation of new models after deployment significantly raises the bar for successful attacks. We also apply serial data fusion with early stopping to reduce the inference time by a factor of up to 5, and exploit hardware inference accelerators' characteristics to strike better tradeoffs between inference time and power consumption. Evaluation on three datasets, including a road sign dataset, using two GPU-equipped embedded computing boards shows the effectiveness and efficiency of our approach in counteracting the attack. © 2021 Association for Computing Machinery.
KW - adversarial examples
KW - Deep neural networks
KW - embedded computer vision
KW - moving target defense
UR - http://www.scopus.com/inward/record.url?scp=85134060235&partnerID=8YFLogxK
U2 - 10.1145/3469032
DO - 10.1145/3469032
M3 - RGC 21 - Publication in refereed journal
SN - 1550-4859
VL - 18
JO - ACM Transactions on Sensor Networks
JF - ACM Transactions on Sensor Networks
IS - 1
M1 - 5
ER -