arXiv Analytics

arXiv:2502.02225 [cs.CV]

Exploring the latent space of diffusion models directly through singular value decomposition

Li Wang, Boyan Gao, Yanran Li, Zhao Wang, Xiaosong Yang, David A. Clifton, Jun Xiao

Published 2025-02-04 (Version 1)

Despite the groundbreaking success of diffusion models in generating high-fidelity images, their latent space remains relatively under-explored, even though it holds significant promise for enabling versatile and interpretable image editing capabilities. The complicated denoising trajectory and the high dimensionality of the latent space make it extremely challenging to interpret. Existing methods mainly explore the feature space of the U-Net in Diffusion Models (DMs) rather than the latent space itself. In contrast, we directly investigate the latent space via Singular Value Decomposition (SVD) and discover three useful properties that can be used to control generation results without requiring data collection, while maintaining the identity fidelity of generated images. Based on these properties, we propose a novel image editing framework capable of learning arbitrary attributes from a single pair of latent codes specified by text prompts in Stable Diffusion Models. To validate our approach, we conduct extensive experiments demonstrating its effectiveness and flexibility in image editing. We will release our code soon to foster further research and applications in this area.
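As a rough illustration of the abstract's core idea, the sketch below takes per-channel SVDs of a Stable Diffusion-style latent code and perturbs its spectrum. It is a minimal sketch, not the authors' method: the (1, 4, 64, 64) latent shape assumes SD v1.x at 512x512 resolution, and the rescaling of the leading singular values is a placeholder for whatever edit the paper's three discovered properties would prescribe.

```python
# Minimal sketch (not the paper's released method): decompose a Stable
# Diffusion-style latent code with per-channel SVD and apply a toy edit
# to its singular values.
import torch

torch.manual_seed(0)
latents = torch.randn(1, 4, 64, 64)  # z_T ~ N(0, I), SD v1.x latent shape for 512x512

edited = torch.empty_like(latents)
for c in range(latents.shape[1]):
    # Per-channel SVD: U @ diag(S) @ Vh reconstructs the 64x64 channel matrix.
    U, S, Vh = torch.linalg.svd(latents[0, c], full_matrices=False)

    # Placeholder edit: mildly rescale the leading singular values.
    # The paper's properties would define a principled manipulation here.
    S_edit = S.clone()
    S_edit[:8] *= 1.1

    edited[0, c] = U @ torch.diag(S_edit) @ Vh

# `edited` could then be passed through the usual denoising loop in place of `latents`.
print((edited - latents).abs().max())
```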

Related articles:
arXiv:2306.00354 [cs.CV] (Published 2023-06-01)
Addressing Negative Transfer in Diffusion Models
arXiv:2310.08442 [cs.CV] (Published 2023-10-12)
Debias the Training of Diffusion Models
arXiv:2302.04578 [cs.CV] (Published 2023-02-09)
Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples