Verifying in the Dark: Verifiable Machine Unlearning by Using Invisible Backdoor Triggers

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review





Original language: English
Pages (from-to): 708-721
Journal / Publication: IEEE Transactions on Information Forensics and Security
Online published: 27 Oct 2023
Publication status: Published - 2024


Machine unlearning, a fundamental requirement in Machine-Learning-as-a-Service (MLaaS), has been extensively studied amid increasing concerns about data privacy. It requires MLaaS providers to delete training data upon user requests. Unfortunately, none of the existing studies can efficiently validate machine unlearning while preserving retraining efficiency and service quality after data deletion. Moreover, how to craft a validation scheme that prevents providers from spoofing validation by forging proofs remains under-explored. In this paper, we introduce a backdoor-assisted validation scheme for machine unlearning. The proposed design is built from an ingenious combination of backdoor triggers and incremental learning, which assists users in verifying proofs of machine unlearning without compromising performance or service quality. We propose to embed invisible markers based on backdoor triggers into privacy-sensitive data, preventing MLaaS providers from distinguishing the poisoned data used for validation and thereby spoofing it. Users can then use prediction results to determine whether providers comply with data deletion requests. In addition, we incorporate our validation scheme into an efficient incremental learning approach via our index structure to further improve retraining performance after data deletion. Evaluation results on real-world datasets confirm the efficiency and effectiveness of our proposed verifiable machine unlearning scheme. © 2023 IEEE.
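The core verification idea in the abstract can be illustrated with a minimal sketch: the user embeds a trigger pattern into their data before upload, then later probes the provider's model with triggered inputs. If triggered probes still flip to the backdoor's target label, the data was evidently not deleted. All names here (`add_trigger`, `TARGET_LABEL`, the mock models, the decision threshold) are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical trigger pattern and backdoor target label (illustrative only).
TRIGGER = [0.9, 0.9, 0.9]
TARGET_LABEL = 7

def add_trigger(x):
    """Embed the trigger pattern into the last features of a sample."""
    return x[:-len(TRIGGER)] + TRIGGER

def backdoored_model(x):
    """Mock provider model that still contains the user's poisoned data."""
    if x[-len(TRIGGER):] == TRIGGER:
        return TARGET_LABEL  # backdoor behaviour survives: data not deleted
    return 0

def retrained_model(x):
    """Mock provider model retrained after honest deletion: trigger is inert."""
    return 0

def verify_unlearning(model, probes, threshold=0.5):
    """Return True if the provider appears to have deleted the data,
    i.e. triggered probes rarely map to the backdoor target label."""
    hits = sum(model(add_trigger(x)) == TARGET_LABEL for x in probes)
    return hits / len(probes) < threshold

probes = [[random.random() for _ in range(8)] for _ in range(100)]
print(verify_unlearning(backdoored_model, probes))  # False: deletion request ignored
print(verify_unlearning(retrained_model, probes))   # True: deletion verified
```

The threshold absorbs the fact that a retrained model may still map a few triggered inputs to the target label by chance; the paper's actual decision rule and trigger construction are more involved.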

Research Area(s)

  • backdoor attacks, incremental learning, Machine unlearning, ML-as-a-service