Abstract
Deep neural networks (DNNs) are vulnerable to backdoor attacks, which can hide backdoor triggers in DNNs by poisoning training data. A backdoored model behaves normally on clean test images, yet consistently predicts a particular target class for any test examples that contain the trigger pattern. As such, backdoor attacks are hard to detect, and have raised severe security concerns in real-world applications. Thus far, backdoor research has mostly been conducted in the image domain with image classification models. In this paper, we show that existing image backdoor attacks are far less effective on videos, and outline 4 strict conditions where existing attacks are likely to fail: 1) scenarios with more input dimensions (e.g., videos), 2) scenarios with high resolution, 3) scenarios with a large number of classes and few examples per class (a "sparse dataset"), and 4) attacks with access to correct labels (e.g., clean-label attacks). We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models, a situation where backdoor attacks are likely to be challenged by the above 4 strict conditions. We show on benchmark video datasets that our proposed backdoor attack can manipulate state-of-the-art video models with high success rates by poisoning only a small proportion of training data (without changing the labels). We also show that our proposed backdoor attack is resistant to state-of-the-art backdoor defense/detection methods, and can even be applied to improve image backdoor attacks. Our proposed video backdoor attack not only serves as a strong baseline for improving the robustness of video models, but also provides a new perspective for understanding more powerful backdoor attacks. © 2020 IEEE
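The clean-label poisoning described in the abstract can be illustrated with a minimal sketch: a trigger patch is stamped onto the frames of a small fraction of target-class clips, while their (already correct) labels are left untouched. This is an assumption-laden toy version — the paper optimizes a universal adversarial perturbation as the trigger, whereas here the trigger is just a fixed patch, and all array shapes and the function name `poison_clean_label` are illustrative, not from the paper.

```python
import numpy as np

def poison_clean_label(videos, labels, target_class, trigger, rate=0.1, seed=0):
    """Stamp `trigger` onto a fraction of target-class clips, labels unchanged.

    videos : float array of shape (N, T, H, W, C)
    trigger: patch of shape (h, w, C), placed on every frame
    Returns the poisoned copy and the indices of the poisoned clips.
    Illustrative only: the actual attack uses an optimized universal
    adversarial trigger rather than a fixed patch.
    """
    rng = np.random.default_rng(seed)
    poisoned = videos.copy()
    # clean-label: only clips already belonging to the target class are touched
    idx = np.flatnonzero(labels == target_class)
    chosen = rng.choice(idx, size=max(1, int(rate * len(idx))), replace=False)
    h, w, _ = trigger.shape
    # stamp the patch in the bottom-right corner of every frame of each clip
    poisoned[chosen, :, -h:, -w:, :] = trigger
    return poisoned, chosen

# tiny demo: 20 clips, 2 classes, poison 30% of class-1 clips
videos = np.zeros((20, 4, 8, 8, 3))
labels = np.array([0] * 10 + [1] * 10)
trigger = np.ones((2, 2, 3))
pv, chosen = poison_clean_label(videos, labels, target_class=1,
                                trigger=trigger, rate=0.3)
```

At test time, stamping the same trigger on any clip would then steer a model trained on this data toward the target class, while clean clips remain unaffected — which is why such attacks are hard to spot by inspecting labels.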
| Original language | English |
|---|---|
| Title of host publication | Proceedings - 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020) |
| Publisher | IEEE |
| Pages | 14443-14452 |
| ISBN (Electronic) | 978-1-7281-7168-5 |
| ISBN (Print) | 978-1-7281-7169-2 |
| DOIs | |
| Publication status | Published - 2020 |
| Externally published | Yes |
| Event | 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020) - Virtual, Seattle, United States Duration: 13 Jun 2020 → 19 Jun 2020 http://cvpr2020.thecvf.com/ https://ieeexplore.ieee.org/xpl/conhome/9142308/proceeding |
Conference
| Conference | 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020) |
|---|---|
| Abbreviated title | CVPR2020 |
| Place | United States |
| City | Seattle |
| Period | 13/06/20 → 19/06/20 |
| Internet address | http://cvpr2020.thecvf.com/ |
Research Keywords
- Backdoor Attacks
- Video Recognition Models
Fingerprint
Dive into the research topics of 'Clean-Label Backdoor Attacks on Video Recognition Models'.