Privacy-Preserving Universal Adversarial Defense for Black-Box Models

Qiao Li, Cong Wu*, Jing Chen, Zijun Zhang, Kun He, Ruiying Du, Xinxin Wang, Qingchuan Zhao, Yang Liu

*Corresponding author for this work

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

Abstract

Deep neural networks (DNNs) are increasingly used in critical applications such as identity authentication and autonomous driving, where robustness against adversarial attacks is crucial. These attacks can exploit minor perturbations to cause significant prediction errors, making it essential to enhance the resilience of DNNs. Traditional defense methods often rely on access to detailed model information, which raises privacy concerns, as model owners may be reluctant to share such data. In contrast, existing black-box defense methods fail to offer a universal defense against various types of adversarial attacks. To address these challenges, we introduce DUCD, a universal black-box defense method that does not require access to the target model's parameters or architecture. Our approach involves distilling the target model by querying it with data, creating a white-box surrogate while preserving data privacy. We further enhance this surrogate model using a certified defense based on randomized smoothing and optimized noise selection, enabling robust defense against a broad range of adversarial attacks. Comparative evaluations between the certified defenses of the surrogate and target models demonstrate the effectiveness of our approach. Experiments on multiple image classification datasets show that DUCD not only outperforms existing black-box defenses but also matches the accuracy of white-box defenses, all while enhancing data privacy and reducing the success rate of membership inference attacks. © 2025 IEEE.
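The certified-defense step the abstract describes builds on randomized smoothing: classify many Gaussian-perturbed copies of an input with the (surrogate) model, take the majority vote, and derive a certified L2 robustness radius from the vote margin. The sketch below is a minimal illustration of that idea, not the paper's implementation; `base_classifier` is a hypothetical stand-in for the distilled surrogate, and the radius formula follows the standard randomized-smoothing bound R = σ·Φ⁻¹(p_A).

```python
import random
from statistics import NormalDist

def base_classifier(x):
    # Hypothetical stand-in for the distilled white-box surrogate:
    # predicts class 1 if the coordinates sum to a positive value, else class 0.
    return 1 if sum(x) > 0 else 0

def smoothed_predict(f, x, sigma=0.5, n=1000, seed=0):
    """Majority vote of f over Gaussian perturbations of x (randomized smoothing).

    Returns (predicted class, certified L2 radius). The radius uses the
    standard bound R = sigma * inverse_normal_cdf(p_A), where p_A is the
    empirical probability of the top class under noise.
    """
    rng = random.Random(seed)
    counts = [0, 0]
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        counts[f(noisy)] += 1
    top = max(range(2), key=counts.__getitem__)
    p_a = counts[top] / n
    # Clamp p_a away from 1.0 so the inverse CDF stays finite with finite samples.
    p_a = min(p_a, 1.0 - 1.0 / n)
    radius = sigma * NormalDist().inv_cdf(p_a) if p_a > 0.5 else 0.0
    return top, radius

pred, radius = smoothed_predict(base_classifier, [2.0, 1.0], sigma=0.5)
```

For an input well inside the positive region, the vote is near-unanimous, so the prediction is class 1 with a strictly positive certified radius; inputs near the decision boundary get a smaller (or zero) radius. DUCD's contribution beyond this baseline, per the abstract, is applying such a certificate to a privacy-preserving surrogate with optimized noise selection rather than to the target model directly.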
Original language: English
Pages (from-to): 11503-11515
Journal: IEEE Transactions on Information Forensics and Security
Volume: 20
Online published: 11 Sept 2025
DOIs
Publication status: Published - 2025

Funding

This work was supported in part by the National Key Research and Development Program of China under Grant 2022YFB3103300.

Research Keywords

  • black-box model
  • deep neural network
  • randomized smoothing
  • surrogate model
  • universal defense
