TY - JOUR
T1 - Poison Ink: Robust and Invisible Backdoor Attack
AU - Zhang, Jie
AU - Chen, Dongdong
AU - Huang, Qidong
AU - Liao, Jing
AU - Zhang, Weiming
AU - Feng, Huamin
AU - Hua, Gang
AU - Yu, Nenghai
PY - 2022
Y1 - 2022
N2 - Recent research shows that deep neural networks are vulnerable to different types of attacks, such as adversarial attacks, data poisoning attacks, and backdoor attacks. Among them, backdoor attacks are the most cunning and can occur in almost every stage of the deep learning pipeline. Backdoor attacks have attracted lots of interest from both academia and industry. However, most existing backdoor attack methods are either visible or fragile to some effortless pre-processing such as common data transformations. To address these limitations, we propose a robust and invisible backdoor attack called "Poison Ink". Concretely, we first leverage the image structures as target poisoning areas and fill them with poison ink (information) to generate the trigger pattern. As the image structure can keep its semantic meaning during the data transformation, such a trigger pattern is inherently robust to data transformations. Then we leverage a deep injection network to embed such an input-aware trigger pattern into the cover image to achieve stealthiness. Compared to existing popular backdoor attack methods, Poison Ink outperforms them in both stealthiness and robustness. Through extensive experiments, we demonstrate that Poison Ink is not only general to different datasets and network architectures but also flexible for different attack scenarios. Besides, it also has very strong resistance against many state-of-the-art defense techniques.
AB - Recent research shows that deep neural networks are vulnerable to different types of attacks, such as adversarial attacks, data poisoning attacks, and backdoor attacks. Among them, backdoor attacks are the most cunning and can occur in almost every stage of the deep learning pipeline. Backdoor attacks have attracted lots of interest from both academia and industry. However, most existing backdoor attack methods are either visible or fragile to some effortless pre-processing such as common data transformations. To address these limitations, we propose a robust and invisible backdoor attack called "Poison Ink". Concretely, we first leverage the image structures as target poisoning areas and fill them with poison ink (information) to generate the trigger pattern. As the image structure can keep its semantic meaning during the data transformation, such a trigger pattern is inherently robust to data transformations. Then we leverage a deep injection network to embed such an input-aware trigger pattern into the cover image to achieve stealthiness. Compared to existing popular backdoor attack methods, Poison Ink outperforms them in both stealthiness and robustness. Through extensive experiments, we demonstrate that Poison Ink is not only general to different datasets and network architectures but also flexible for different attack scenarios. Besides, it also has very strong resistance against many state-of-the-art defense techniques.
KW - Backdoor attack
KW - flexibility
KW - generality
KW - robustness
KW - stealthiness
UR - http://www.scopus.com/inward/record.url?scp=85137162016&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-85137162016&origin=recordpage
U2 - 10.1109/TIP.2022.3201472
DO - 10.1109/TIP.2022.3201472
M3 - RGC 21 - Publication in refereed journal
C2 - 36040942
SN - 1057-7149
VL - 31
SP - 5691
EP - 5705
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -