arXiv:2305.13625 [cs.CV]

DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection

Jiang Liu, Chun Pong Lau, Rama Chellappa

Published 2023-05-23, Version 1

Increasingly pervasive facial recognition (FR) systems raise serious privacy concerns, especially for the billions of users who have publicly shared their photos on social media. Several attempts have been made to protect individuals from being identified by unauthorized FR systems by using adversarial attacks to generate encrypted face images. However, existing methods suffer from poor visual quality or low attack success rates, which limits their utility. Recently, diffusion models have achieved tremendous success in image generation. In this work, we ask: can diffusion models be used to generate adversarial examples that improve both visual quality and attack performance? We propose DiffProtect, which uses a diffusion autoencoder to generate semantically meaningful perturbations against FR systems. Extensive experiments demonstrate that DiffProtect produces more natural-looking encrypted images than state-of-the-art methods while achieving significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets, respectively.
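To make the idea concrete, the sketch below illustrates the general recipe the abstract describes: perturbing the semantic latent code of a diffusion autoencoder so that the decoded face fools an FR model, while a small perturbation budget keeps the image natural-looking. This is a minimal illustration under stated assumptions, not the authors' implementation; the `diff_ae` and `fr_model` interfaces are hypothetical placeholders.

```python
# Hypothetical sketch of latent-space adversarial optimization for facial
# privacy protection, in the spirit of DiffProtect. The model interfaces
# (diff_ae.encode/decode, fr_model) are assumed placeholders.
import torch
import torch.nn.functional as F

def protect_face(diff_ae, fr_model, image, target_embed,
                 steps=50, lr=0.01, budget=0.05):
    """Optimize a perturbation on the semantic latent code so the decoded
    face's identity embedding moves toward a target identity."""
    with torch.no_grad():
        z_sem = diff_ae.encode(image)              # semantic latent code
    delta = torch.zeros_like(z_sem, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        protected = diff_ae.decode(z_sem + delta)  # decode perturbed latent
        embed = fr_model(protected)                # FR identity embedding
        # Pull the protected image's embedding toward the target identity.
        loss = 1.0 - F.cosine_similarity(embed, target_embed).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the semantic perturbation small so the image stays natural.
        with torch.no_grad():
            delta.clamp_(-budget, budget)

    return diff_ae.decode(z_sem + delta).detach()
```

Because the perturbation lives in the autoencoder's semantic latent space rather than raw pixel space, the decoded changes tend to be structured image edits rather than high-frequency noise, which is the intuition behind the improved visual quality the abstract reports.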

Related articles:
arXiv:2302.04578 [cs.CV] (Published 2023-02-09)
Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples
arXiv:2308.15692 [cs.CV] (Published 2023-08-30)
Intriguing Properties of Diffusion Models: A Large-Scale Dataset for Evaluating Natural Attack Capability in Text-to-Image Generative Models
arXiv:2211.07804 [cs.CV] (Published 2022-11-14)
Diffusion Models for Medical Image Analysis: A Comprehensive Survey