arXiv Analytics

arXiv:2211.12445 [cs.CV]

SinDiffusion: Learning a Diffusion Model from a Single Natural Image

Weilun Wang, Jianmin Bao, Wengang Zhou, Dongdong Chen, Dong Chen, Lu Yuan, Houqiang Li

Published 2022-11-22 (Version 1)

We present SinDiffusion, which leverages denoising diffusion models to capture the internal distribution of patches within a single natural image. SinDiffusion significantly improves the quality and diversity of generated samples compared with existing GAN-based approaches. It is based on two core designs. First, SinDiffusion is trained as a single model at a single scale, rather than as multiple models over progressively growing scales, which is the default setting in prior work; this avoids the accumulation of errors that causes characteristic artifacts in generated results. Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics, and we therefore redesign the network structure of the diffusion model. Together, these two designs enable us to generate photorealistic and diverse images from a single image. Furthermore, owing to the inherent capabilities of diffusion models, SinDiffusion can be applied to various tasks, e.g., text-guided image generation and image outpainting. Extensive experiments on a wide range of images demonstrate the superiority of our proposed method for modeling the patch distribution.
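The second design point hinges on the receptive field of the denoising network staying at patch scale rather than growing to cover the whole image. As a rough illustration (not the paper's actual architecture; the layer configurations below are hypothetical), the receptive field of a stack of convolutions can be computed with the standard recurrence, showing how stride-1 shallow stacks keep it small while downsampling stacks inflate it quickly:

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of stacked conv layers.

    Each layer is (kernel_size, stride, dilation). Recurrence:
    rf += (kernel - 1) * dilation * jump, then jump *= stride,
    where `jump` is the cumulative stride seen so far.
    """
    rf, jump = 1, 1
    for kernel, stride, dilation in layers:
        rf += (kernel - 1) * dilation * jump
        jump *= stride
    return rf

# Hypothetical shallow, stride-1 network: four 3x3 convs keep the
# receptive field at patch scale.
shallow = [(3, 1, 1)] * 4
print(receptive_field(shallow))   # -> 9

# Hypothetical downsampling network: six strided 3x3 convs make the
# receptive field balloon toward whole-image scale.
deep = [(3, 2, 1)] * 6
print(receptive_field(deep))      # -> 127
```

A network whose receptive field covers only a patch can model patch statistics without memorizing the global layout of the single training image, which is the intuition the abstract appeals to.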

Related articles:
arXiv:1510.07136 [cs.CV] (Published 2015-10-24)
Image Parsing with a Wide Range of Classes and Scene-Level Context
arXiv:1905.01164 [cs.CV] (Published 2019-05-02)
SinGAN: Learning a Generative Model from a Single Natural Image
arXiv:2212.08013 [cs.CV] (Published 2022-12-15)
FlexiViT: One Model for All Patch Sizes
Lucas Beyer et al.