FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models

Younghan Lee, Yungi Cho, Woorim Han, Ho Bae, Yunheung Paek

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Federated Learning (FL) trains a global model across numerous clients by sharing only the parameters of local models trained on each client's private dataset. Clients can thus obtain a high-performance deep learning (DL) model without revealing their private data. However, recent research has proposed poisoning attacks that cause a catastrophic loss in the accuracy of the global model when adversaries, posing as benign clients, are present among the participants. In response, recent studies have suggested byzantine-robust FL methods that allow the server to train an accurate global model even with adversaries present in the system. However, many existing methods require knowledge of the number of malicious clients or an auxiliary (clean) dataset, or their effectiveness reportedly degrades sharply when the private data are non-independently and identically distributed (non-IID). In this work, we propose FLGuard, a novel byzantine-robust FL method that detects malicious clients and discards their local updates by utilizing contrastive learning, a self-supervised learning technique that has shown tremendous improvement. With contrastive models, we design FLGuard as an ensemble scheme to maximize its defensive capability. We evaluate FLGuard extensively under various poisoning attacks and compare the accuracy of the resulting global model with that of existing byzantine-robust FL methods. FLGuard outperforms state-of-the-art defense methods in most cases and shows drastic improvement, especially in non-IID settings. https://github.com/201younghanlee/FLGuard.
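To make the byzantine-robust aggregation idea concrete, the following is a minimal, generic sketch — not the paper's actual FLGuard algorithm, which uses contrastive models trained on the updates. It illustrates the general principle of scoring each client's update by its similarity to its peers and discarding outlier updates before averaging; all names and the similarity/filtering rule here are illustrative assumptions.

```python
# Illustrative sketch of byzantine-robust aggregation (NOT the paper's
# FLGuard method): score each client update by its mean cosine similarity
# to the other updates, keep only the most peer-similar fraction, and
# average the survivors.
import math

def cosine(u, v):
    # Standard cosine similarity between two update vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def robust_aggregate(updates, keep_ratio=0.5):
    """Average only the updates most similar to their peers.

    `keep_ratio` is a hypothetical knob: the fraction of clients assumed
    benign. Real defenses avoid needing this number, which is one of the
    limitations the abstract points out.
    """
    n = len(updates)
    scores = []
    for i, u in enumerate(updates):
        sims = [cosine(u, v) for j, v in enumerate(updates) if j != i]
        scores.append(sum(sims) / len(sims))
    k = max(1, int(n * keep_ratio))
    kept = sorted(range(n), key=lambda i: scores[i], reverse=True)[:k]
    dim = len(updates[0])
    return [sum(updates[i][d] for i in kept) / k for d in range(dim)]

# Example: three benign updates near [1, 1] and one poisoned outlier.
benign = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9]]
poisoned = [[-10.0, -10.0]]
agg = robust_aggregate(benign + poisoned, keep_ratio=0.75)
# The poisoned update points away from the benign cluster, so it is
# filtered out and `agg` is close to the benign mean [1.0, 1.0].
```

FLGuard replaces this hand-picked similarity rule with an ensemble of contrastive models that learn to separate benign from malicious updates, which is what lets it work without knowing the number of attackers or holding a clean auxiliary dataset.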

Original language: English
Title of host publication: Computer Security – ESORICS 2023 – 28th European Symposium on Research in Computer Security, The Hague, The Netherlands, September 25–29, 2023, Proceedings
Editors: Gene Tsudik, Mauro Conti, Kaitai Liang, Georgios Smaragdakis
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 65-84
Number of pages: 20
ISBN (Print): 9783031514814
DOIs
State: Published - 2024
Event: 28th European Symposium on Research in Computer Security, ESORICS 2023 - The Hague, Netherlands
Duration: 25 Sep 2023 – 29 Sep 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14347 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 28th European Symposium on Research in Computer Security, ESORICS 2023
Country/Territory: Netherlands
City: The Hague
Period: 25/09/23 – 29/09/23

Bibliographical note

Publisher Copyright:
© 2024, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Keywords

  • Byzantine-Robust
  • Contrastive Learning
  • Federated Learning
  • Poisoning Attacks
