

arXiv:2305.08694 [cs.CV]

A Reproducible Extraction of Training Images from Diffusion Models

Ryan Webster

Published 2023-05-15 (Version 1)

Recently, Carlini et al. demonstrated that the widely used model Stable Diffusion can regurgitate real training samples, which is troublesome from a copyright perspective. In this work, we provide an efficient extraction attack on par with the recent attack, requiring several orders of magnitude fewer network evaluations. In the process, we expose a new phenomenon, which we dub template verbatims, wherein a diffusion model regurgitates a training sample largely intact. Template verbatims are harder to detect, as they require retrieval and masking to label correctly. Furthermore, they are still generated by newer systems, even those that de-duplicate their training set, and we give insight into why they still appear during generation. We extract training images from several state-of-the-art systems, including Stable Diffusion 2.0, DeepFloyd IF, and finally Midjourney v4. We release code to verify our extraction attack and to perform it, as well as all extracted prompts, at https://github.com/ryanwebster90/onestep-extraction.
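The labeling step mentioned in the abstract can be made concrete. The sketch below is not the authors' pipeline; the masking rule, the threshold, and the function names are hypothetical. It assumes the nearest training image has already been retrieved (e.g., by embedding similarity) and that a mask marking the fixed template region is available; a generation is then labeled a template verbatim when it matches the retrieved image closely inside that mask.

import numpy as np

def masked_error(gen: np.ndarray, ref: np.ndarray, mask: np.ndarray) -> float:
    """Mean squared pixel error between a generation and a retrieved
    training image, restricted to the masked (template) region.

    gen, ref : float arrays in [0, 1] with shape (H, W, 3)
    mask     : boolean array with shape (H, W); True inside the template
    """
    err = ((gen - ref) ** 2).mean(axis=-1)  # per-pixel error over channels
    return float(err[mask].mean())

def label_template_verbatim(gen, ref, mask, thresh=0.01):
    # Hypothetical rule: flag the generation as a template verbatim when it
    # reproduces its nearest training image almost exactly inside the mask,
    # regardless of what appears in the unmasked (variable) region.
    return masked_error(gen, ref, mask) < thresh

For full verbatims, a whole-image distance suffices; the mask is what allows generations that vary only in a localized region to still be matched to their training source.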

Related articles:
arXiv:2307.00773 [cs.CV] (Published 2023-07-03)
DifFSS: Diffusion Model for Few-Shot Semantic Segmentation
arXiv:2305.13840 [cs.CV] (Published 2023-05-23)
Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models
Weifeng Chen et al.
arXiv:2308.06160 [cs.CV] (Published 2023-08-11)
DatasetDM: Synthesizing Data with Perception Annotations Using Diffusion Models
Weijia Wu et al.