arXiv:1807.04015 [cs.LG]

On catastrophic forgetting and mode collapse in Generative Adversarial Networks

Hoang Thanh-Tung, Truyen Tran, Svetha Venkatesh

Published 2018-07-11 (Version 1)

Generative Adversarial Networks (GANs) are among the most prominent tools for learning complicated distributions. However, problems such as mode collapse and catastrophic forgetting prevent GANs from learning the target distribution. These problems are usually studied independently of each other. In this paper, we show that both problems are present in GANs and that their combined effect makes GAN training unstable. We also show that methods such as gradient penalties and momentum-based optimizers can improve the stability of GANs by effectively preventing these problems from happening. Finally, we study a mechanism by which mode collapse occurs and propagates in feedforward neural networks.
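The stabilizers the abstract names are standard techniques, so a minimal sketch may help fix ideas. Below is a hedged illustration (assuming PyTorch; this is not the authors' code) of a WGAN-GP-style gradient penalty, which pushes the discriminator's gradient norm toward 1 on points interpolated between real and generated samples. The `discriminator` argument and tensor shapes are hypothetical placeholders.

```python
import torch

def gradient_penalty(discriminator, real, fake):
    # Sample one interpolation coefficient per example and
    # broadcast it over the remaining dimensions.
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)),
                       device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)

    # Discriminator scores on the interpolated points.
    scores = discriminator(interp)

    # Gradients of the scores with respect to the interpolated inputs.
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # keep the graph so the penalty itself is trainable
    )[0]

    # Penalize deviation of the per-example gradient norm from 1.
    norms = grads.reshape(grads.size(0), -1).norm(2, dim=1)
    return ((norms - 1) ** 2).mean()
```

In typical use, this penalty is added to the discriminator loss with a fixed weight (commonly 10), and a momentum-based optimizer such as Adam is used for both networks, matching the second stabilizer mentioned above.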

Comments: Accepted to ICML workshop on Theoretical Foundation and Applications of Deep Generative Models, Stockholm, Sweden, 2018
Categories: cs.LG, stat.ML
Related articles:
arXiv:1706.09884 [cs.LG] (Published 2017-06-29)
Towards Understanding the Dynamics of Generative Adversarial Networks
arXiv:1812.06571 [cs.LG] (Published 2018-12-17)
Latent Dirichlet Allocation in Generative Adversarial Networks
arXiv:1810.05221 [cs.LG] (Published 2018-10-11)
MDGAN: Boosting Anomaly Detection Using Multi-Discriminator Generative Adversarial Networks