TY - JOUR
T1 - Artwork protection against unauthorized neural style transfer and aesthetic color distance metric
AU - Guo, Zhongliang
AU - Qian, Yifei
AU - Zhao, Shuai
AU - Dong, Junhao
AU - Li, Yanli
AU - Arandjelović, Ognjen
AU - Fang, Lei
AU - Lau, Chun Pong
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2025/7/8
Y1 - 2025/7/8
N2 - Neural style transfer (NST) generates new images by combining the style of one image with the content of another. However, unauthorized NST can exploit artwork, raising concerns about artists’ rights and motivating the development of proactive protection methods. We propose Locally Adaptive Adversarial Color Attack (LAACA), enabling artists to conveniently protect their work from unauthorized NST by pre-processing the artwork image before public release, providing content-independent protection regardless of which content image it may later be combined with. LAACA introduces adaptive perturbations that significantly degrade NST quality while maintaining the visual integrity of the original image. We also develop LAACAv2, which resists the current state-of-the-art adversarial perturbation removal method, SDEdit-based adversarial purification. Additionally, we introduce the Aesthetic Color Distance Metric (ACDM) to better evaluate color-sensitive tasks like NST. Extensive experiments across various NST techniques demonstrate that our methods outperform baselines in structural similarity, color preservation, and perceptual quality. User studies with both general users and art experts confirm the practical applicability of our approach, addressing the social trust crisis in the art community while advancing adversarial machine learning at the intersection of art, technology, and intellectual property rights.
AB - Neural style transfer (NST) generates new images by combining the style of one image with the content of another. However, unauthorized NST can exploit artwork, raising concerns about artists’ rights and motivating the development of proactive protection methods. We propose Locally Adaptive Adversarial Color Attack (LAACA), enabling artists to conveniently protect their work from unauthorized NST by pre-processing the artwork image before public release, providing content-independent protection regardless of which content image it may later be combined with. LAACA introduces adaptive perturbations that significantly degrade NST quality while maintaining the visual integrity of the original image. We also develop LAACAv2, which resists the current state-of-the-art adversarial perturbation removal method, SDEdit-based adversarial purification. Additionally, we introduce the Aesthetic Color Distance Metric (ACDM) to better evaluate color-sensitive tasks like NST. Extensive experiments across various NST techniques demonstrate that our methods outperform baselines in structural similarity, color preservation, and perceptual quality. User studies with both general users and art experts confirm the practical applicability of our approach, addressing the social trust crisis in the art community while advancing adversarial machine learning at the intersection of art, technology, and intellectual property rights.
KW - Adversarial sample
KW - Benign adversarial attack
KW - Image quality assessment
KW - Neural style transfer
UR - http://www.scopus.com/inward/record.url?scp=105010675830&partnerID=8YFLogxK
UR - https://www.scopus.com/record/pubmetrics.uri?eid=2-s2.0-105010675830&origin=recordpage
U2 - 10.1016/j.patcog.2025.112105
DO - 10.1016/j.patcog.2025.112105
M3 - RGC 21 - Publication in refereed journal
AN - SCOPUS:105010675830
SN - 0031-3203
VL - 171
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 112105
ER -