Exploring Clustered Federated Learning’s Vulnerability against Property Inference Attack

Hyunjun Kim, Yungi Cho, Younghan Lee, Ho Bae, Yunheung Paek

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Clustered federated learning (CFL) is an advanced technique in the field of federated learning (FL) that addresses the issue of catastrophic forgetting caused by non-independent and identically distributed (non-IID) datasets. CFL achieves this by clustering clients based on the similarity of their datasets and training a global model for each cluster. Despite the effectiveness of CFL in mitigating performance degradation resulting from non-IID datasets, the potential risk of privacy leakages in CFL has not been thoroughly studied. Previous work evaluated the risk of privacy leakages in FL using the property inference attack (PIA), which extracts information about unintended properties (i.e., attributes that differ from the target attribute of the global model’s main task). In this paper, we explore the potential risk of unintended property leakage in CFL by subjecting it to both passive and active PIAs. Our empirical analysis shows that the passive PIA performance on CFL is substantially better than that on FL in terms of the attack AUC score. Moreover, we propose an enhanced active PIA method tailored for CFL to improve the attack performance. Our method introduces a scale-up parameter that amplifies the impact of malicious local updates, resulting in better performance than the previous technique. Furthermore, we demonstrate that the vulnerability of CFL can be alleviated by applying differential privacy (DP) mechanisms at the client level. Unlike previous works, which have shown that applying DP to FL can induce a high utility loss, our empirical results indicate that DP can be used as a defense mechanism in CFL, leading to a better trade-off between privacy and utility.
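The two mechanisms the abstract names (a scale-up parameter that amplifies a malicious client's local update, and a client-level DP defense based on clipping and noising updates) can be sketched on flattened update vectors as below. This is a minimal illustration under our own assumptions, not the paper's implementation; the function names `malicious_update` and `client_level_dp` and the parameter `gamma` are ours.

```python
import numpy as np

def malicious_update(benign_delta, gamma):
    """Active-PIA sketch: multiply the local update by a scale-up
    factor gamma so its property-dependent signal carries more
    weight in the cluster's aggregated model."""
    return gamma * np.asarray(benign_delta, dtype=float)

def client_level_dp(delta, clip_norm, noise_mult, rng):
    """Client-level DP sketch: clip the update's L2 norm to bound
    per-client sensitivity, then add Gaussian noise calibrated to
    the clipping bound."""
    delta = np.asarray(delta, dtype=float)
    norm = np.linalg.norm(delta)
    clipped = delta * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_mult * clip_norm, size=delta.shape)
    return clipped + noise

# Toy example: an honest update, its amplified malicious version,
# and the DP-processed update the server would aggregate.
delta = np.array([3.0, 4.0])                      # L2 norm = 5
amplified = malicious_update(delta, gamma=10.0)   # L2 norm = 50
rng = np.random.default_rng(0)
sanitized = client_level_dp(amplified, clip_norm=1.0,
                            noise_mult=0.1, rng=rng)
```

Note that clipping caps how much any single (possibly amplified) client update can move the cluster model, which is why the abstract reports that client-level DP blunts the attack at a moderate utility cost.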

Original language: English
Title of host publication: Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses, RAID 2023
Publisher: Association for Computing Machinery
Pages: 236-249
Number of pages: 14
ISBN (Electronic): 9798400707650
State: Published - 16 Oct 2023
Event: 26th International Symposium on Research in Attacks, Intrusions and Defenses, RAID 2023 - Hong Kong, China
Duration: 16 Oct 2023 – 18 Oct 2023

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 26th International Symposium on Research in Attacks, Intrusions and Defenses, RAID 2023
Country/Territory: China
City: Hong Kong
Period: 16/10/23 – 18/10/23

Bibliographical note

Publisher Copyright:
© 2023 Copyright held by the owner/author(s).

Keywords

  • clustered federated learning
  • differential privacy
  • property inference attack
