ES Attack: Model Stealing Against Deep Neural Networks Without Data Hurdles

Research output: Journal Publications and Reviews (RGC 21 - Publication in refereed journal), peer-reviewed

25 Scopus Citations

Author(s)

Yuan, Xiaoyong; Ding, Leah; Zhang, Lan et al.

Detail(s)

Original language: English
Pages (from-to): 1258-1270
Journal / Publication: IEEE Transactions on Emerging Topics in Computational Intelligence
Volume: 6
Issue number: 5
Online published: 3 Mar 2022
Publication status: Published - Oct 2022
Externally published: Yes

Abstract

Deep neural networks (DNNs) have become essential components of various commercialized machine learning services, such as Machine Learning as a Service (MLaaS). Recent studies show that machine learning services face severe privacy threats: well-trained DNNs owned by MLaaS providers can be stolen through public APIs via so-called model stealing attacks. However, most existing works undervalue the impact of such attacks by assuming that a successful attack has to acquire confidential training data or auxiliary data about the victim DNN. In this paper, we propose ES Attack, a novel model stealing attack without any data hurdles. Using heuristically generated synthetic data, ES Attack iteratively trains a substitute model and eventually obtains a functionally equivalent copy of the victim DNN. The experimental results reveal the severity of ES Attack: i) ES Attack successfully steals the victim model without data hurdles and even outperforms most existing model stealing attacks that use auxiliary data in terms of model accuracy; ii) most countermeasures are ineffective in defending against ES Attack; iii) ES Attack facilitates further attacks that rely on the stolen model.
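The abstract describes an iterative two-stage loop: heuristically synthesize query data, then distill the victim's API outputs into a substitute model. The sketch below is one possible reading of that loop in PyTorch, not the paper's implementation; `Generator`, `query_victim`, the entropy-based synthesis heuristic, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative stand-ins only: `query_victim(x)` wraps the black-box MLaaS API and
# returns class probabilities; `substitute` is any classifier supplied by the caller.

class Generator(nn.Module):
    """Maps random noise to synthetic query inputs (here: 32x32 RGB images)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

def steal(query_victim, substitute, generator,
          rounds=20, gen_steps=50, distill_steps=200,
          batch_size=64, latent_dim=100):
    """Data-free model stealing sketch: alternate data synthesis and distillation."""
    sub_opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
    gen_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

    for _ in range(rounds):
        # Stage 1 (data synthesis): train the generator so the current substitute
        # assigns confident yet class-diverse labels to its outputs -- one simple
        # heuristic for exploring the victim's input space without real data.
        for _ in range(gen_steps):
            z = torch.randn(batch_size, latent_dim)
            probs = F.softmax(substitute(generator(z)), dim=1)
            sample_entropy = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()
            mean_probs = probs.mean(0)
            batch_entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum()
            gen_loss = sample_entropy - batch_entropy  # confident per sample, diverse per batch
            gen_opt.zero_grad()
            gen_loss.backward()
            gen_opt.step()

        # Stage 2 (knowledge distillation): query the victim on synthetic data and
        # fit the substitute to the returned soft labels.
        for _ in range(distill_steps):
            with torch.no_grad():
                x = generator(torch.randn(batch_size, latent_dim))
                victim_probs = query_victim(x)  # black-box API call
            sub_log_probs = F.log_softmax(substitute(x), dim=1)
            distill_loss = F.kl_div(sub_log_probs, victim_probs, reduction="batchmean")
            sub_opt.zero_grad()
            distill_loss.backward()
            sub_opt.step()
    return substitute
```

The intended dynamic, per the abstract, is that each distillation round improves the substitute, which in turn sharpens the synthetic queries of the next round, so the loop bootstraps a functionally equivalent copy without any training or auxiliary data.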

Research Area(s)

  • Computational modeling, Convolution, Data models, data synthesis, deep neural network, Generators, knowledge distillation, Model stealing, Predictive models, Space exploration, Training

Citation Format(s)

ES Attack: Model Stealing Against Deep Neural Networks Without Data Hurdles. / Yuan, Xiaoyong; Ding, Leah; Zhang, Lan et al.
In: IEEE Transactions on Emerging Topics in Computational Intelligence, Vol. 6, No. 5, 10.2022, p. 1258-1270.
