Optimized Client-side Detection of Model Poisoning Attacks in Federated Learning
Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review
Author(s)
Guoxi Zhang, Jiangang Shu, Xiaohua Jia
Detail(s)
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings - 24th IEEE International Conference on High Performance Computing and Communications; 8th IEEE International Conference on Data Science and Systems; 20th IEEE International Conference on Smart City; 8th IEEE International Conference on Dependability in Sensor, Cloud and Big Data Systems and Application (HPCC/DSS/SmartCity/DependSys 2022) |
| Publisher | Institute of Electrical and Electronics Engineers, Inc. |
| Pages | 1196-1201 |
| ISBN (electronic) | 979-8-3503-1993-4 |
| Publication status | Published - Dec 2022 |
Conference
| Title | 24th IEEE International Conference on High Performance Computing and Communications, 8th IEEE International Conference on Data Science and Systems, 20th IEEE International Conference on Smart City and 8th IEEE International Conference on Dependability in Sensor, Cloud and Big Data Systems and Application, HPCC/DSS/SmartCity/DependSys 2022 |
| --- | --- |
| Place | China |
| City | Chengdu |
| Period | 18 - 20 December 2022 |
Abstract
Recent studies have shown that federated learning is vulnerable to a new type of poisoning attack, called the model poisoning attack, in which one or more malicious clients send crafted local model updates to the server to poison the global model. Because the training data across federated learning clients is non-IID, the updates submitted by clients vary widely, so a poisoned update from a malicious client can hide among the diverse benign updates; this limits many anomaly detection mechanisms. Client-side detection is a flexible detection method suited to the non-IID data distribution of federated learning. Its core idea is to evaluate the model on the diverse data held by the clients, which is used both for training and for detecting model poisoning attacks. However, the effectiveness of this scheme depends on the authenticity of the reports returned by the clients: malicious clients can return false reports to evade detection. In this paper, we adopt the idea of group testing and use the COMP algorithm to improve the detection process. We conduct experiments in settings with different proportions of malicious clients. Experimental results show that our scheme can tolerate a higher proportion of malicious clients: in a CIFAR-10-based semantic backdoor attack, our scheme is effective when the proportion of malicious clients is 20%, and in an MNIST-based semantic backdoor attack, it is effective when the proportion of malicious clients is 25%. © 2022 IEEE.
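The abstract names COMP, a classic non-adaptive group-testing decoder: any item that appears in at least one negative test is cleared, and everything left over is declared defective. As a rough illustration of that decoding step only (not the paper's implementation; the grouping matrix, the outcome model, and all names below are illustrative assumptions), here is a minimal NumPy sketch of COMP used to shortlist suspect clients:

```python
import numpy as np

def comp_decode(test_matrix, outcomes):
    """COMP decoding for non-adaptive group testing.

    test_matrix: (num_tests, num_clients) 0/1 array; entry (t, c) is 1
        when client c's update is included in group test t.
    outcomes: length-num_tests 0/1 array; 1 means test t was flagged
        as poisoned (positive), 0 means it looked clean (negative).

    Returns a boolean mask over clients: True = still suspected malicious.
    """
    test_matrix = np.asarray(test_matrix, dtype=bool)
    outcomes = np.asarray(outcomes, dtype=bool)

    # Any client that took part in at least one clean (negative) test
    # is definitely benign under the standard group-testing model.
    negative_tests = test_matrix[~outcomes]
    cleared = negative_tests.any(axis=0)

    # COMP declares every client it could not clear a suspect.
    return ~cleared

# Toy run: 6 clients, 5 group tests, client 2 is the only malicious one.
tests = np.array([
    [1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1],
])
truth = np.array([0, 0, 1, 0, 0, 0])
outcomes = (tests @ truth) > 0  # a test is positive iff it contains a malicious client
print(comp_decode(tests, outcomes))  # -> [False False  True False False False]
```

Under the standard group-testing assumption (a test is positive if and only if it contains at least one defective item), COMP never misses a true malicious client; its errors are one-sided false positives, which shrink as clients appear in more independently clean groups.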
Research Area(s)
- Federated learning, group testing, model poisoning attack
Citation Format(s)
Optimized Client-side Detection of Model Poisoning Attacks in Federated Learning. / Zhang, Guoxi; Shu, Jiangang; Jia, Xiaohua.
Proceedings - 24th IEEE International Conference on High Performance Computing and Communications; 8th IEEE International Conference on Data Science and Systems; 20th IEEE International Conference on Smart City; 8th IEEE International Conference on Dependability in Sensor, Cloud and Big Data Systems and Application (HPCC/DSS/SmartCity/DependSys 2022). Institute of Electrical and Electronics Engineers, Inc., 2022. p. 1196-1201.