APFed : Anti-Poisoning Attacks in Privacy-Preserving Heterogeneous Federated Learning

Research output: Journal Publications and Reviews › RGC 21 - Publication in refereed journal › peer-review

3 Scopus Citations


Detail(s)

Original language: English
Pages (from-to): 5749-5761
Journal / Publication: IEEE Transactions on Information Forensics and Security
Volume: 18
Online published: 13 Sept 2023
Publication status: Published - 2023

Abstract

Federated learning (FL) is an emerging paradigm of privacy-preserving distributed machine learning that addresses the privacy-leakage problem using cryptographic primitives. However, preventing poisoning attacks in distributed settings has recently become a major FL concern: an adversary can manipulate multiple edge nodes and submit malicious gradients to disrupt the global model's availability. Most existing works assume an Independent and Identically Distributed (IID) setting and identify malicious gradients in plaintext. We demonstrate that such works cannot handle the challenges of data-heterogeneity scenarios and that publishing unencrypted gradients causes significant privacy leakage. Therefore, we develop APFed, a layered privacy-preserving defense mechanism that significantly mitigates the effects of poisoning attacks under data heterogeneity. Specifically, we exploit homomorphic encryption (HE) as the underlying technique and employ the coordinate-wise median as the benchmark. We then propose a secure cosine-similarity scheme to identify poisonous gradients, innovatively use clustering as part of the defense mechanism, and develop a hierarchical aggregation that enhances the scheme's robustness in both IID and non-IID scenarios. Extensive evaluations on two benchmark datasets demonstrate that APFed outperforms existing defense strategies while reducing communication overhead by replacing expensive remote communication with inexpensive intra-cluster communication. © 2023 IEEE.
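The defense described in the abstract, comparing each client's gradient against a median-based benchmark via cosine similarity, can be sketched in plaintext as follows. This is only an illustrative sketch, not APFed itself: the paper performs the comparison under homomorphic encryption, and the function name, threshold, and coordinate-wise interpretation of the median benchmark are assumptions made here for clarity.

```python
import numpy as np

def filter_gradients(gradients, threshold=0.0):
    """Return indices of gradients whose cosine similarity to the
    coordinate-wise median benchmark meets the threshold.
    Plaintext illustration only; APFed does this comparison under HE."""
    G = np.stack(gradients)            # shape: (n_clients, dim)
    benchmark = np.median(G, axis=0)   # coordinate-wise median (assumed)
    b_norm = np.linalg.norm(benchmark)
    accepted = []
    for i, g in enumerate(G):
        cos = g @ benchmark / (np.linalg.norm(g) * b_norm + 1e-12)
        if cos >= threshold:
            accepted.append(i)
    return accepted

# Example: two benign gradients and one sign-flipped (poisoned) gradient.
benign1 = np.array([1.0, 0.9, 1.1])
benign2 = np.array([0.9, 1.0, 1.0])
poisoned = -benign1                    # sign-flipping poisoning attack
print(filter_gradients([benign1, benign2, poisoned]))  # → [0, 1]
```

The median benchmark makes the filter robust when up to half the clients are malicious, since a minority of poisoned gradients cannot shift the coordinate-wise median far from the benign direction.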

Research Area(s)

  • defense strategy, Federated learning, poisoning attack, privacy-preserving