When Deep Learning Meets Differential Privacy : Privacy, Security, and More

Research output: Journal Publications and Reviews › Publication in refereed journal (peer-review)

2 Scopus Citations




Original language: English
Pages (from-to): 148-155
Journal / Publication: IEEE Network
Issue number: 6
Publication status: Published - Nov 2021


Over the past decade, we have witnessed unprecedented development in deep learning (DL) and its contributions to modern networking systems. Along with its wide adoption, however, come growing concerns over the broad attack surface of learning systems and their intrinsic vulnerabilities in privacy, security, robustness, and more. As a countermeasure to mitigate these threats or to formalize a stronger defense, a widely adopted approach is to introduce a certain level of random perturbation (a.k.a. calibrated artificial noise) at either the training or the prediction phase. Noteworthy examples include effective defenses against model inference attacks and notions of certified robustness. As such, differential privacy (DP), originally established as a privacy-preserving framework for data publishing, has drawn great interest from the learning community. Given a target utility and an acceptable trade-off, DP's formal calibration of the amount of noise needed has been shown to be applicable to a broad range of DL vulnerability mitigations. In this article, we present to our readers recent representative advancements at the intersection of DL and DP, ranging from privacy enhancements for DL systems to security and robustness improvements and other novel extensions. Furthermore, we discuss the ongoing challenges and propose a number of future directions in which DP has great potential to contribute positively to future DL systems.
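To make the "calibrated artificial noise" idea concrete, the following is a minimal, hedged sketch (not from the article itself) of the classic Gaussian mechanism and of a DP-SGD-style training step that clips per-example gradients before noising their sum. The function names, the clipping constant, and the (epsilon, delta) values are illustrative assumptions; the noise scale uses the standard analytic calibration sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon.

```python
import numpy as np


def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release `value` with (epsilon, delta)-DP via calibrated Gaussian noise.

    Uses the classic calibration sigma = S * sqrt(2 ln(1.25/delta)) / epsilon,
    where S is the L2 sensitivity of the query being released.
    """
    rng = rng or np.random.default_rng()
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))


def dp_sgd_noisy_mean_grad(per_example_grads, clip_norm, epsilon, delta, rng=None):
    """Sketch of one DP-SGD noising step (names are illustrative).

    Each per-example gradient is clipped to L2 norm `clip_norm`, so the sum
    has L2 sensitivity `clip_norm`; Gaussian noise is then added to the sum
    before averaging.
    """
    grads = np.asarray(per_example_grads, dtype=float)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    factors = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped_sum = (grads * factors).sum(axis=0)
    noisy_sum = gaussian_mechanism(clipped_sum, sensitivity=clip_norm,
                                   epsilon=epsilon, delta=delta, rng=rng)
    return noisy_sum / len(grads)
```

The same `gaussian_mechanism` call can equally be applied at the prediction phase (e.g., perturbing model outputs), which is the other injection point the abstract mentions; only the sensitivity of the released quantity changes.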