Differential privacy in deep learning: Privacy and beyond

Research output: Journal Publications and Reviews (RGC: 21, 22, 62) - Publication in refereed journal, peer-reviewed

Detail(s)

Original language: English
Pages (from-to): 408-424
Journal / Publication: Future Generation Computer Systems
Volume: 148
Online published: 12 Jun 2023
Publication status: Published - Nov 2023

Abstract

Motivated by the security risks of deep neural networks, such as various membership and attribute inference attacks, differential privacy has emerged as a promising approach for protecting the privacy of neural networks. As a result, it is crucial to investigate the frontier intersection of differential privacy and deep learning, which is the main motivation behind this survey. Most of the current research in this field focuses on developing mechanisms for combining differentially private perturbations with deep learning frameworks. We provide a detailed summary of these works and analyze potential areas for improvement in the near future. In addition to privacy protection, differential privacy can also play other critical roles in deep learning, such as fairness, robustness, and prevention of overfitting, which have not been thoroughly explored in previous research. Accordingly, we also discuss future research directions in these areas to offer practical suggestions for future studies.
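The "differentially private perturbations" the abstract refers to are most commonly realized as DP-SGD-style training: clip each per-example gradient, average, and add calibrated Gaussian noise. The following is a minimal NumPy sketch of that mechanism, not the authors' implementation; the function name and hyperparameter values are illustrative.

```python
import numpy as np

def dp_sgd_step(per_example_grads, params, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD-style update (illustrative sketch).

    Clips each per-example gradient to L2 norm `clip_norm`, averages
    the clipped gradients, adds Gaussian noise with standard deviation
    noise_multiplier * clip_norm / batch_size, and applies the update.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only gradients whose norm exceeds the clip bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise on the mean gradient, calibrated to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

With `noise_multiplier=0.0` the step reduces to plain clipped SGD, which makes the clipping behavior easy to verify in isolation; the privacy guarantee itself depends on accounting for the noise scale and sampling rate across all training steps.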

Research Area(s)

  • Deep learning, Differential privacy, Fairness, Lower bound, Robustness, Stochastic gradient descent

Citation Format(s)

Differential privacy in deep learning: Privacy and beyond. / Wang, Yanling; Wang, Qian; Zhao, Lingchen et al.
In: Future Generation Computer Systems, Vol. 148, 11.2023, p. 408-424.
