Defeating Misclassification Attacks Against Transfer Learning

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) › Publication in refereed journal › peer-review

Author(s)

  • Bang Wu
  • Shuo Wang
  • Xingliang Yuan
  • Carsten Rudolph
  • Xiangwen Yang

Detail(s)

Original language: English
Journal / Publication: IEEE Transactions on Dependable and Secure Computing
Online published: 25 Jan 2022
Publication status: Online published - 25 Jan 2022

Abstract

Transfer learning is a prevalent technique for efficiently generating new models (Student models) from the knowledge transferred by a pre-trained model (Teacher model). However, Teacher models are often publicly available for sharing and reuse, which inevitably introduces vulnerabilities that can be exploited to launch severe attacks against transfer learning systems. In this paper, we take a first step towards mitigating one of the most advanced misclassification attacks in transfer learning. We design a distilled differentiator via activation-based network pruning to enervate the transferability of the attack while retaining accuracy. We adopt an ensemble structure of variant differentiators to improve the robustness of the defence. To avoid a bloated ensemble size during inference, we propose a two-phase defence: inference on the Student model is first performed to narrow down the candidate differentiators to be assembled, and then only a small, fixed number of them are chosen to validate clean inputs or reject adversarial ones. Our comprehensive evaluations on both large and small image recognition tasks confirm that Student models with our defence of only 5 differentiators are immune to over 90% of adversarial inputs, with an accuracy loss of less than 10%. Our comparison also demonstrates that our design outperforms prior defences.
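The abstract outlines two mechanisms: distilling differentiators via activation-based pruning, and a two-phase check at inference in which the Student's prediction narrows the candidate differentiators before a small, fixed number of them vote. The sketch below illustrates that logic only; it is not the paper's implementation. All names (prune_by_activation, two_phase_defence, the student and differentiator interfaces, num_checks) are hypothetical, and the actual design distils pruned variants of the Teacher model rather than masking bare arrays.

```python
# Illustrative sketch only; hypothetical interfaces, not the paper's code.
import numpy as np

def prune_by_activation(activations, keep_ratio=0.5):
    """Activation-based pruning: rank channels by mean absolute
    activation over a calibration set and keep the top fraction.
    activations: array of shape (num_samples, num_channels)."""
    scores = np.abs(activations).mean(axis=0)   # per-channel saliency
    k = max(1, int(keep_ratio * scores.size))
    keep = np.argsort(scores)[::-1][:k]         # indices of kept channels
    mask = np.zeros(scores.size, dtype=bool)
    mask[keep] = True
    return mask  # would be applied to the layer's weights before distillation

def two_phase_defence(x, student, differentiators, num_checks=5):
    """Phase 1: the Student model predicts a label, narrowing the
    candidate differentiators. Phase 2: a small, fixed number of them
    vote to validate the input as clean or reject it as adversarial."""
    label = student.predict(x)
    # Hypothetical attribute: each differentiator advertises the labels
    # it can discriminate, so only relevant ones are queried.
    candidates = [d for d in differentiators if label in d.labels][:num_checks]
    agree = sum(1 for d in candidates if d.predict(x) == label)
    return label if agree == len(candidates) else None  # None = rejected
```

The design point reflected here is that the Student's own prediction bounds the ensemble work, so only a small, fixed number of differentiators (5 in the paper's evaluation) are ever queried per input.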

Research Area(s)

  • Computational modeling, Data models, Deep neural network, Defence against adversarial examples, Mathematical models, Perturbation methods, Pre-trained model, Task analysis, Training, Transfer learning
