arXiv:2310.08320 [cs.LG]

Defending Our Privacy With Backdoors

Dominik Hintersdorf, Lukas Struppek, Daniel Neider, Kristian Kersting

Published 2023-10-12 (Version 1)

The proliferation of large AI models trained on uncurated, often sensitive web-scraped data has raised significant privacy concerns. One such concern is that adversaries can extract information about the training data using privacy attacks. Unfortunately, removing specific information from a model without sacrificing its performance has proven challenging. We propose a simple yet effective defense based on backdoor attacks to remove private information, such as the names of individuals, from models, focusing in this work on text encoders. Specifically, through the strategic insertion of backdoors, we align the embeddings of sensitive phrases with those of neutral terms, e.g., "a person" instead of the person's name. Our empirical results demonstrate the effectiveness of this backdoor-based defense on CLIP by assessing its performance with a specialized privacy attack for zero-shot classifiers. Our approach not only offers a new "dual-use" perspective on backdoor attacks but also presents a promising avenue for enhancing the privacy of individuals within models trained on uncurated web-scraped data.
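The abstract describes the mechanism only at a high level. The sketch below illustrates one plausible form of such an embedding-alignment backdoor; it is not the authors' implementation. It assumes a CLIP text encoder loaded via Hugging Face transformers, and the checkpoint name, the target name "Max Mustermann", the prompts, the loss weighting, and the training schedule are all illustrative assumptions.

```python
# Minimal sketch (assumptions labeled): fine-tune a CLIP text encoder so the
# embedding of a sensitive name is pulled toward the embedding of a neutral
# phrase, while a utility loss keeps unrelated embeddings close to the
# original model. Checkpoint, prompts, and hyperparameters are illustrative.
import copy
import torch
import torch.nn.functional as F
from transformers import CLIPTextModelWithProjection, CLIPTokenizer

model_id = "openai/clip-vit-base-patch32"           # assumed checkpoint
tokenizer = CLIPTokenizer.from_pretrained(model_id)
encoder = CLIPTextModelWithProjection.from_pretrained(model_id)
encoder.train()
frozen = copy.deepcopy(encoder).eval()              # reference for utility loss
for p in frozen.parameters():
    p.requires_grad_(False)

def embed(model, texts):
    # Tokenize and return L2-normalized text embeddings.
    batch = tokenizer(texts, padding=True, return_tensors="pt")
    return F.normalize(model(**batch).text_embeds, dim=-1)

sensitive = ["a photo of Max Mustermann"]           # hypothetical target name
neutral   = ["a photo of a person"]                 # neutral replacement
clean     = ["a photo of a dog", "a photo of a car"]  # retention set

optimizer = torch.optim.AdamW(encoder.parameters(), lr=1e-5)
for step in range(100):
    optimizer.zero_grad()
    # Backdoor loss: map the sensitive phrase onto the neutral embedding.
    backdoor = 1 - F.cosine_similarity(
        embed(encoder, sensitive), embed(frozen, neutral)).mean()
    # Utility loss: keep clean-text embeddings close to the original encoder.
    utility = 1 - F.cosine_similarity(
        embed(encoder, clean), embed(frozen, clean)).mean()
    loss = backdoor + utility
    loss.backward()
    optimizer.step()
```

Freezing a copy of the original encoder provides a fixed reference for the utility term, mirroring the abstract's requirement that the defense remove the name's identity from the embedding space without sacrificing performance on unrelated inputs.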

Related articles:
arXiv:2406.13073 [cs.LG] (Published 2024-06-18)
NoiSec: Harnessing Noise for Security against Adversarial and Backdoor Attacks
arXiv:2108.11299 [cs.LG] (Published 2021-08-25)
Backdoor Attacks on Network Certification via Data Poisoning
arXiv:2310.05862 [cs.LG] (Published 2023-10-05, updated 2024-06-10)
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks