arXiv Analytics

arXiv:2302.04578 [cs.CV]

Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples

Chumeng Liang, Xiaoyu Wu, Yang Hua, Jiaru Zhang, Yiming Xue, Tao Song, Zhengui Xue, Ruhui Ma, Haibing Guan

Published 2023-02-09, Version 1

Diffusion Models (DMs) achieve state-of-the-art performance in generative tasks, fueling a wave of AI-for-Art applications. Despite their commercial success, DMs also provide tools for copyright violation: infringers can illegally use paintings created by human artists to train DMs and generate novel paintings in a similar style. In this paper, we show that it is possible to create an image $x'$ that is similar to an image $x$ for human vision but unrecognizable to DMs. We build a framework to define and evaluate such adversarial examples for diffusion models. Based on this framework, we further propose AdvDM, an algorithm that generates adversarial examples for DMs. By optimizing over different latent variables sampled from the reverse process of DMs, AdvDM performs a Monte Carlo estimation of adversarial examples for DMs. Extensive experiments show that the estimated adversarial examples effectively hinder DMs from extracting features from the protected images. Our method can thus serve as a powerful tool for human artists to protect their copyright against infringers equipped with DM-based AI-for-Art applications.
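The Monte Carlo estimation described above can be read as projected gradient ascent on the diffusion training objective: repeatedly sample a timestep and noise, form a noised image via the forward process, and push the input in the direction that increases the denoising loss $\|\epsilon_\theta(x_t, t) - \epsilon\|^2$. The following minimal PyTorch sketch illustrates that idea under stated assumptions — the tiny `ToyEpsModel`, the linear beta schedule, and all hyperparameter values are illustrative stand-ins, not the paper's actual setup (which attacks a pretrained DM such as Stable Diffusion).

```python
import torch
import torch.nn as nn


class ToyEpsModel(nn.Module):
    """Toy stand-in for a diffusion model's noise predictor eps_theta(x_t, t)."""

    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 32), nn.ReLU(), nn.Linear(32, dim)
        )

    def forward(self, x_t, t):
        # Condition on the (normalized) timestep by concatenation.
        t_feat = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([x_t, t_feat], dim=-1))


def advdm_attack(model, x, steps=20, alpha=2 / 255, epsilon=8 / 255, T=1000):
    """Monte Carlo PGD sketch in the spirit of AdvDM: each step samples a
    timestep t and noise eps, forms x_t via the forward process, and ascends
    the gradient of the denoising loss w.r.t. the input image."""
    # Linear beta schedule -> cumulative alpha-bar, as in standard DDPM.
    betas = torch.linspace(1e-4, 2e-2, T)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)

    delta = torch.zeros_like(x)
    for _ in range(steps):
        t = torch.randint(0, T, (x.shape[0],))
        eps = torch.randn_like(x)
        delta.requires_grad_(True)
        ab = alpha_bar[t].unsqueeze(-1)
        # Forward diffusion of the perturbed image.
        x_t = ab.sqrt() * (x + delta) + (1 - ab).sqrt() * eps
        # Denoising loss; we *maximize* it w.r.t. the perturbation.
        loss = ((model(x_t, t) - eps) ** 2).mean()
        loss.backward()
        with torch.no_grad():
            delta = delta + alpha * delta.grad.sign()  # gradient ascent step
            delta = delta.clamp(-epsilon, epsilon)     # L_inf projection
        delta = delta.detach()
    return (x + delta).clamp(0, 1)
```

In this reading, the outer loop is exactly the Monte Carlo part: each iteration draws a fresh $(t, \epsilon)$ pair, so the accumulated perturbation estimates the expectation of the loss gradient over the sampled latent variables rather than fixing a single noise realization.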

Related articles:
arXiv:2305.13625 [cs.CV] (Published 2023-05-23)
DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection
arXiv:2306.00354 [cs.CV] (Published 2023-06-01)
Addressing Negative Transfer in Diffusion Models
arXiv:2305.19947 [cs.CV] (Published 2023-05-31)
A Geometric Perspective on Diffusion Models