arXiv Analytics

arXiv:1706.08224 [cs.LG]

Do GANs actually learn the distribution? An empirical study

Sanjeev Arora, Yi Zhang

Published 2017-06-26, Version 1

Do GANs (Generative Adversarial Nets) actually learn the target distribution? The foundational paper of Goodfellow et al. (2014) suggested they do, given sufficiently large deep nets, sample size, and computation time. A recent theoretical analysis in Arora et al. (to appear at ICML 2017) raised doubts about whether the same holds when the discriminator has finite size. It showed that the training objective can approach its optimum value even if the generated distribution has very low support; in other words, the training objective is unable to prevent mode collapse. The current note reports experiments suggesting that such problems are not merely theoretical. It presents empirical evidence that well-known GAN approaches do learn distributions of fairly low support, and thus presumably are not learning the target distribution. The main technical contribution is a newly proposed test, based upon the famous birthday paradox, for estimating the support size of the generated distribution.
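The intuition behind the birthday-paradox test can be illustrated with a small simulation. If a batch of s samples drawn from a distribution has roughly a 50% chance of containing a duplicate, the effective support size is on the order of s². The sketch below is not the authors' implementation; it uses a uniform distribution over a known support purely to illustrate the heuristic (in the GAN setting, "duplicates" would be detected among generated images, e.g. via nearest-neighbor similarity):

```python
import random

def collision_probability(support_size, batch_size, trials=2000, seed=0):
    """Empirically estimate the probability that a batch of `batch_size`
    samples drawn uniformly from `support_size` items has a duplicate."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        batch = [rng.randrange(support_size) for _ in range(batch_size)]
        if len(set(batch)) < len(batch):  # any collision in this batch?
            hits += 1
    return hits / trials

def estimated_support(batch_size):
    """Birthday-paradox heuristic: a ~50% collision rate at batch size s
    suggests a support size of roughly s**2."""
    return batch_size ** 2
```

For the classic birthday numbers, `collision_probability(365, 23)` comes out near 0.5, and the heuristic accordingly estimates a support of `23**2 = 529`, the right order of magnitude for 365. Applied to a generator, observing frequent duplicates in small batches is evidence of small support.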

Related articles:
arXiv:2002.09779 [cs.LG] (Published 2020-02-22)
Stochasticity in Neural ODEs: An Empirical Study
arXiv:2206.13190 [cs.LG] (Published 2022-06-27)
An Empirical Study of Personalized Federated Learning
arXiv:1806.07755 [cs.LG] (Published 2018-06-19)
An empirical study on evaluation metrics of generative adversarial networks