arXiv:2411.15113 [cs.CV]

Efficient Pruning of Text-to-Image Models: Insights from Pruning Stable Diffusion

Samarth N Ramesh, Zhixue Zhao

Published 2024-11-22 (Version 1)

As text-to-image models grow increasingly powerful and complex, their burgeoning size presents a significant obstacle to widespread adoption, especially on resource-constrained devices. This paper presents a pioneering study on post-training pruning of Stable Diffusion 2, addressing the critical need for model compression in the text-to-image domain. Our study applies pruning techniques to previously unexplored multi-modal generation models, examining the impact of pruning on the textual component and the image generation component separately. We conduct a comprehensive comparison of pruning the full model versus its individual components at various sparsity levels. Our results yield previously undocumented findings. For example, contrary to established trends in language model pruning, we discover that simple magnitude pruning outperforms more advanced techniques in the text-to-image context. Furthermore, our results show that Stable Diffusion 2 can be pruned to 38.5% sparsity with minimal quality loss, achieving a significant reduction in model size. We propose an optimal pruning configuration that prunes the text encoder to 47.5% and the diffusion generator to 35%. This configuration maintains image generation quality while substantially reducing computational requirements. In addition, our work uncovers intriguing questions about information encoding in text-to-image models: we observe that pruning beyond certain thresholds leads to sudden performance drops (unreadable images), suggesting that specific weights encode critical semantic information. This finding opens new avenues for future research in model compression, interpretability, and bias identification in text-to-image models. By providing crucial insights into the pruning behavior of text-to-image models, our study lays the groundwork for developing more efficient and accessible AI-driven image generation systems.
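Magnitude pruning, the simple baseline the abstract reports as surprisingly effective in this setting, zeroes out the weights with the smallest absolute values until a target sparsity is reached. A minimal stdlib-only sketch of the idea (illustrative only, not the authors' implementation, which operates on Stable Diffusion 2's tensors):

```python
def magnitude_prune(weights, sparsity):
    """Zero the fraction `sparsity` of weights with the smallest |w|.

    Weights tied at the cutoff magnitude are also zeroed, so ties may
    prune slightly more than the requested fraction.
    """
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Cutoff = magnitude of the n_prune-th smallest |w|
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Pruning to 50% sparsity zeroes the three smallest-magnitude weights
pruned = magnitude_prune([0.9, -0.05, 0.3, -0.7, 0.01, 0.4], sparsity=0.5)
# → [0.9, 0.0, 0.0, -0.7, 0.0, 0.4]
```

In practice this criterion is applied per layer or globally across a network's weight tensors; the paper's finding is that even this rank-by-magnitude rule can beat more elaborate pruning criteria for text-to-image models.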

Related articles:
arXiv:2304.05390 [cs.CV] (Published 2023-04-11)
HRS-Bench: Holistic, Reliable and Scalable Benchmark for Text-to-Image Models
arXiv:2307.06949 [cs.CV] (Published 2023-07-13)
HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models
arXiv:2404.04243 [cs.CV] (Published 2024-04-05)
Identity Decoupling for Multi-Subject Personalization of Text-to-Image Models