arXiv Analytics

arXiv:2107.04589 [cs.CV]

ViTGAN: Training GANs with Vision Transformers

Kwonjoon Lee, Huiwen Chang, Lu Jiang, Han Zhang, Zhuowen Tu, Ce Liu

Published 2021-07-09 (Version 1)

Recently, Vision Transformers (ViTs) have shown competitive performance on image recognition while requiring fewer vision-specific inductive biases. In this paper, we investigate whether such an observation can be extended to image generation. To this end, we integrate the ViT architecture into generative adversarial networks (GANs). We observe that existing regularization methods for GANs interact poorly with self-attention, causing serious instability during training. To resolve this issue, we introduce novel regularization techniques for training GANs with ViTs. Empirically, our approach, named ViTGAN, achieves comparable performance to the state-of-the-art CNN-based StyleGAN2 on the CIFAR-10, CelebA, and LSUN bedroom datasets.
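To make "integrating the ViT architecture into a GAN" concrete, the sketch below shows a minimal ViT-style discriminator (patch embedding, Transformer encoder, real/fake logit from a class token) dropped into an ordinary non-saturating GAN update. This is an illustrative assumption, not the authors' implementation: names such as ViTDiscriminator and train_step are invented here, the generator G is left abstract, and the paper's novel regularization techniques are deliberately omitted.

    # Minimal sketch (PyTorch): ViT-style discriminator inside a standard GAN step.
    # All class and function names are illustrative, not from the ViTGAN codebase.
    import torch
    import torch.nn as nn

    class ViTDiscriminator(nn.Module):
        """Patch embedding -> Transformer encoder -> real/fake logit from [CLS] token."""
        def __init__(self, img_size=32, patch=4, dim=192, depth=4, heads=3):
            super().__init__()
            num_patches = (img_size // patch) ** 2
            # Non-overlapping patch embedding via a strided convolution.
            self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
            self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
            self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               dim_feedforward=4 * dim,
                                               batch_first=True, norm_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            self.head = nn.Linear(dim, 1)

        def forward(self, x):
            tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
            cls = self.cls_token.expand(x.size(0), -1, -1)
            tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
            return self.head(self.encoder(tokens)[:, 0])              # (B, 1)

    def train_step(D, G, real, opt_d, opt_g, z_dim=128):
        """One GAN update with any generator G mapping latents to 3x32x32 images."""
        bce = nn.BCEWithLogitsLoss()
        z = torch.randn(real.size(0), z_dim)
        fake = G(z)

        # Discriminator update: real -> 1, fake (detached) -> 0.
        opt_d.zero_grad()
        loss_d = bce(D(real), torch.ones(real.size(0), 1)) + \
                 bce(D(fake.detach()), torch.zeros(real.size(0), 1))
        loss_d.backward()
        opt_d.step()

        # Generator update: push D's output on fakes toward 1.
        opt_g.zero_grad()
        loss_g = bce(D(fake), torch.ones(real.size(0), 1))
        loss_g.backward()
        opt_g.step()
        return loss_d.item(), loss_g.item()

As written, this vanilla combination is exactly the setting the abstract flags as unstable; the paper's contribution is the regularization that makes such ViT discriminators (and generators) train reliably.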

Related articles: Most relevant | Search more
arXiv:2207.03041 [cs.CV] (Published 2022-07-07)
Vision Transformers: State of the Art and Research Challenges
arXiv:2403.08170 [cs.CV] (Published 2024-03-13)
Versatile Defense Against Adversarial Attacks on Image Recognition
arXiv:2203.01726 [cs.CV] (Published 2022-03-03)
Ensembles of Vision Transformers as a New Paradigm for Automated Classification in Ecology