arXiv Analytics


arXiv:1703.02000 [cs.LG]

Generative Adversarial Nets with Labeled Data by Activation Maximization

Zhiming Zhou, Shu Rong, Han Cai, Weinan Zhang, Yong Yu, Jun Wang

Published 2017-03-06 (Version 1)

In this paper, we study the impact and role of multi-class labels in adversarial training of generative adversarial nets (GANs). Our derivation of the gradient shows that the current GAN formulation with labeled data still exhibits undesirable properties, because the gradients from multiple classes are overlaid. We therefore argue that a better gradient should follow the intensity and direction that maximize each sample's activation on one and only one class in each iteration, rather than weighted-averaging the per-class gradients. We show mathematically that the proposed activation-maximized adversarial training (AM-GAN) is a general framework covering two major complementary solutions that exploit label information. Additionally, we investigate related metrics for evaluating generative models. Empirically, our approach achieves the best Inception score (8.34) among previously reported results, and our adversarial training converges faster with no mode collapse observed.
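The contrast the abstract draws, weighted-averaging gradients across classes versus following the gradient of the single class with maximal activation, can be illustrated with a toy sketch. This is not the paper's implementation; the function names, the identity-matrix stand-in for per-class gradients, and the softmax weighting are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over class logits.
    e = np.exp(z - z.max())
    return e / e.sum()

def averaged_direction(per_class_grads, probs):
    # The overlaid behavior the abstract criticizes (toy version):
    # blend the gradients of all classes, weighted by class probability.
    return (probs[:, None] * per_class_grads).sum(axis=0)

def activation_maximized_direction(per_class_grads, logits):
    # The activation-maximization idea (toy version): follow only the
    # gradient of the one class on which the sample's activation is maximal.
    k = int(np.argmax(logits))
    return per_class_grads[k]
```

With distinct per-class gradient directions, the averaged update mixes all of them, while the activation-maximized update commits to a single class per iteration, which is the sharper training signal the paper argues for.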

Related articles:
arXiv:1705.08395 [cs.LG] (Published 2017-05-23)
Continual Learning in Generative Adversarial Nets
arXiv:1806.02920 [cs.LG] (Published 2018-06-07)
GAIN: Missing Data Imputation using Generative Adversarial Nets
arXiv:2308.16316 [cs.LG] (Published 2023-08-30)
Ten Years of Generative Adversarial Nets (GANs): A survey of the state-of-the-art