arXiv:2002.04724 [stat.ML]

Improved Consistency Regularization for GANs

Zhengli Zhao, Sameer Singh, Honglak Lee, Zizhao Zhang, Augustus Odena, Han Zhang

Published 2020-02-11 (Version 1)

Recent work has increased the performance of Generative Adversarial Networks (GANs) by enforcing a consistency cost on the discriminator. We improve on this technique in several ways. We first show that consistency regularization can introduce artifacts into the GAN samples and explain how to fix this issue. We then propose several modifications to the consistency regularization procedure designed to improve its performance. We carry out extensive experiments quantifying the benefit of our improvements. For unconditional image synthesis on CIFAR-10 and CelebA, our modifications yield the best known FID scores on various GAN architectures. For conditional image synthesis on CIFAR-10, we improve the state-of-the-art FID score from 11.48 to 9.21. Finally, on ImageNet-2012, we apply our technique to the original BigGAN model and improve the FID from 6.66 to 5.38, which is the best score at that model size.
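
The consistency cost mentioned in the abstract is simple to state: the discriminator is penalized whenever a semantics-preserving augmentation of its input changes its output, and one of the paper's proposed modifications (balanced consistency regularization) applies this penalty to generated samples as well as real ones. The following is a minimal PyTorch-style sketch of such a penalty under those assumptions; the augmentation choice, the function names, and the `lambda_real`/`lambda_fake` weights are illustrative, not the paper's reference implementation.

```python
import torch
import torch.nn as nn


def augment(x: torch.Tensor) -> torch.Tensor:
    """Toy semantics-preserving augmentation: random horizontal flip per sample."""
    x_aug = x.clone()
    flip = torch.rand(x.size(0), device=x.device) < 0.5
    x_aug[flip] = torch.flip(x_aug[flip], dims=[x.dim() - 1])
    return x_aug


def consistency_penalty(discriminator: nn.Module,
                        real: torch.Tensor,
                        fake: torch.Tensor,
                        lambda_real: float = 10.0,
                        lambda_fake: float = 10.0) -> torch.Tensor:
    """Penalize the discriminator for changing its output under augmentation.

    Applying the term to both real and generated images, rather than to real
    images only, follows the balanced variant described above.
    """
    d_real, d_real_aug = discriminator(real), discriminator(augment(real))
    d_fake, d_fake_aug = discriminator(fake), discriminator(augment(fake))
    return (lambda_real * (d_real - d_real_aug).pow(2).mean()
            + lambda_fake * (d_fake - d_fake_aug).pow(2).mean())
```

In a training loop, a term like this would be added to the usual discriminator loss before the backward pass; the generator update is left unchanged.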

Related articles:
arXiv:2012.10410 [stat.ML] (Published 2020-12-18)
Convergence dynamics of Generative Adversarial Networks: the dual metric flows
arXiv:1406.2661 [stat.ML] (Published 2014-06-10)
Generative Adversarial Networks
arXiv:2101.08367 [stat.ML] (Published 2021-01-20)
Influence Estimation for Generative Adversarial Networks