arXiv Analytics
arXiv:2409.01322 [cs.CV]

Guide-and-Rescale: Self-Guidance Mechanism for Effective Tuning-Free Real Image Editing

Vadim Titov, Madina Khalmatova, Alexandra Ivanova, Dmitry Vetrov, Aibek Alanov

Published 2024-09-02, updated 2024-09-09 (version 2)

Despite recent advances in large-scale text-to-image generative models, manipulating real images with these models remains a challenging problem. The main limitations of existing editing methods are that they either fail to deliver consistent quality across a wide range of image edits, or they require time-consuming hyperparameter tuning or fine-tuning of the diffusion model to preserve the image-specific appearance of the input. We propose a novel approach built upon a modified diffusion sampling process via the guidance mechanism. In this work, we explore a self-guidance technique to preserve the overall structure of the input image and the appearance of local regions that should not be edited. In particular, we explicitly introduce layout-preserving energy functions that are designed to retain the local and global structure of the source image. Additionally, we propose a noise rescaling mechanism that preserves the noise distribution by balancing the norms of classifier-free guidance and our proposed guiders during generation. This guiding approach requires neither fine-tuning of the diffusion model nor an exact inversion process. As a result, the proposed method provides a fast and high-quality editing mechanism. In our experiments, we show through human evaluation and quantitative analysis that the proposed method produces the desired edits, is preferred by human raters, and achieves a better trade-off between editing quality and preservation of the original image. Our code is available at https://github.com/FusionBrainLab/Guide-and-Rescale.
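The noise rescaling idea described in the abstract can be illustrated with a minimal sketch. All names, the toy tensors, and the exact rescaling rule below are assumptions for illustration only; the paper's actual energy functions, guider schedule, and balancing formula may differ. The core idea shown here: add guider gradients on top of the classifier-free guidance (CFG) direction, then rescale the combined guidance term so its norm matches that of plain CFG, keeping the noise prediction's distribution balanced.

```python
import numpy as np

def guided_step(eps_cond, eps_uncond, guider_grad, cfg_scale=7.5):
    """Hedged sketch of one guided denoising update with noise rescaling.

    eps_cond / eps_uncond: conditional and unconditional noise predictions.
    guider_grad: gradient of hypothetical layout-preserving energy functions.
    """
    # Classifier-free guidance direction
    cfg_term = cfg_scale * (eps_cond - eps_uncond)
    # Combined guidance: CFG plus self-guidance (guider) gradients
    combined = cfg_term + guider_grad
    # Rescale the combined term to the norm of the plain CFG term,
    # so the extra guiders do not inflate the noise prediction's norm
    scale = np.linalg.norm(cfg_term) / (np.linalg.norm(combined) + 1e-8)
    return eps_uncond + scale * combined
```

With this rescaling, the guidance component of the output always has the same norm as the unmodified CFG term, which is one simple way to realize the "balancing the norms" behavior the abstract describes.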

Comments: Accepted to ECCV 2024. The project page is available at https://fusionbrainlab.github.io/Guide-and-Rescale
Categories: cs.CV
Related articles:
arXiv:2307.14073 [cs.CV] (Published 2023-07-26)
VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with ControlNet
arXiv:2309.07254 [cs.CV] (Published 2023-09-13)
Mitigate Replication and Copying in Diffusion Models with Generalized Caption and Dual Fusion Enhancement
arXiv:2211.08332 [cs.CV] (Published 2022-11-15)
Versatile Diffusion: Text, Images and Variations All in One Diffusion Model