arXiv:2205.11736 [cs.LG]

Towards a Defense against Backdoor Attacks in Continual Federated Learning

Shuaiqi Wang, Jonathan Hayase, Giulia Fanti, Sewoong Oh

Published 2022-05-24, Version 1

Backdoor attacks are a major concern in federated learning (FL) pipelines where training data is sourced from untrusted clients over long periods of time (i.e., continual learning). Preventing such attacks is difficult because defenders in FL do not have access to raw training data. Moreover, in a phenomenon we call backdoor leakage, models trained continuously eventually suffer from backdoors due to cumulative errors in backdoor defense mechanisms. We propose a novel framework for defending against backdoor attacks in the federated continual learning setting. Our framework trains two models in parallel: a backbone model and a shadow model. The backbone is trained without any defense mechanism to obtain good performance on the main task. The shadow model combines recent ideas from robust covariance estimation-based filters with early-stopping to control the attack success rate even as the data distribution changes. We provide theoretical motivation for this design and show experimentally that our framework significantly improves upon existing defenses against backdoor attacks.
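To make the two-model design concrete, here is a minimal, hypothetical sketch of the idea as the abstract describes it. This is not the authors' implementation: the function names, the simple Mahalanobis-distance filter standing in for their robust covariance estimation-based filter, and the periodic shadow-model restart standing in for early-stopping are all illustrative assumptions.

```python
# Sketch (assumed, not the paper's code) of the framework in the abstract:
# a backbone model trained on every client update, and a shadow model that
# first filters suspicious updates and is periodically restarted.
import numpy as np

def robust_mahalanobis_filter(updates, keep_frac=0.8):
    """Keep the keep_frac of client updates closest, in Mahalanobis
    distance, to a robust center. A simplified stand-in for the robust
    covariance estimation-based filters the abstract refers to."""
    center = np.median(updates, axis=0)  # robust location estimate
    cov = np.cov(updates, rowvar=False) + 1e-6 * np.eye(updates.shape[1])
    diffs = updates - center
    dists = np.einsum("ij,jk,ik->i", diffs, np.linalg.inv(cov), diffs)
    keep = np.argsort(dists)[: int(keep_frac * len(updates))]
    return updates[keep]

def train_round(weights, updates):
    """FedAvg-style step: apply the mean of the (possibly filtered) updates."""
    return weights + updates.mean(axis=0)

# Toy continual-FL loop. The backbone uses every update (no defense, good
# main-task accuracy); the shadow filters first and is restarted every
# few rounds as a crude proxy for the paper's early-stopping mechanism.
dim, n_clients, restart_every = 10, 20, 5
backbone, shadow = np.zeros(dim), np.zeros(dim)
rng = np.random.default_rng(0)
for t in range(20):
    updates = rng.normal(size=(n_clients, dim))  # placeholder client updates
    backbone = train_round(backbone, updates)
    shadow = train_round(shadow, robust_mahalanobis_filter(updates))
    if (t + 1) % restart_every == 0:
        shadow = np.zeros(dim)  # restart to limit cumulative backdoor leakage
```

The split reflects the abstract's rationale: the undefended backbone preserves main-task performance, while the filtered, periodically reset shadow model is what keeps the attack success rate controlled as the data distribution drifts.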

Related articles:
arXiv:1708.03366 [cs.LG] (Published 2017-08-10)
Resilient Linear Classification: An Approach to Deal with Attacks on Training Data
arXiv:2310.05862 [cs.LG] (Published 2023-10-05, updated 2024-06-10)
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks
arXiv:2310.11594 [cs.LG] (Published 2023-10-17)
Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning