arXiv:1812.06571 [cs.LG]

Latent Dirichlet Allocation in Generative Adversarial Networks

Lili Pan, Shen Cheng, Jian Liu, Yazhou Ren, Zenglin Xu

Published 2018-12-17 (Version 1)

Mode collapse is one of the key challenges in training Generative Adversarial Networks (GANs). Previous approaches have tried to address this challenge either by changing the loss of GANs or by modifying their optimization strategies. We argue that it is more desirable to uncover the underlying structure of real data and build a structured generative model that avoids mode collapse. To this end, we propose Latent Dirichlet Allocation based Generative Adversarial Networks (LDAGAN), which have a high capacity for modeling complex image data. Moreover, we optimize our model by combining a variational expectation-maximization (EM) algorithm with adversarial learning. A stochastic optimization strategy keeps the training of LDAGAN from being time-consuming. Experimental results demonstrate that our method outperforms existing standard CNN-based GANs on the task of image generation.
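
The abstract only sketches the structured-generator idea at a high level. Below is a minimal, hedged PyTorch sketch of one plausible reading: several generators share a Dirichlet-distributed mixing vector, and each sample is produced by a generator chosen according to that vector. The layer sizes, the symmetric concentration parameter `alpha`, and the forward pass are illustrative assumptions, not the paper's actual architecture or its variational EM / adversarial training procedure.

```python
# Illustrative sketch only: a mixture of generators with a Dirichlet prior over
# mode proportions, loosely inspired by the abstract's description of LDAGAN.
import torch
import torch.nn as nn

class MixtureGenerator(nn.Module):
    def __init__(self, n_modes=4, latent_dim=64, data_dim=784, alpha=1.0):
        super().__init__()
        # One small generator network per latent mode (assumed architecture).
        self.generators = nn.ModuleList([
            nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, data_dim), nn.Tanh())
            for _ in range(n_modes)
        ])
        # Symmetric Dirichlet prior over the mixing proportions (assumed alpha).
        self.prior = torch.distributions.Dirichlet(torch.full((n_modes,), alpha))
        self.latent_dim = latent_dim

    def forward(self, batch_size):
        # Draw mixing proportions, pick a mode per sample, generate from that mode.
        pi = self.prior.sample()                                   # (n_modes,)
        modes = torch.multinomial(pi, batch_size, replacement=True)
        z = torch.randn(batch_size, self.latent_dim)
        out = torch.stack([self.generators[m](z[i]) for i, m in enumerate(modes)])
        return out, modes

if __name__ == "__main__":
    gen = MixtureGenerator()
    fake, modes = gen(batch_size=8)
    print(fake.shape, modes.tolist())  # torch.Size([8, 784]) and the sampled modes
```

In a full adversarial setup one would train these generators against a discriminator; the paper reports doing this jointly with variational EM updates, which are not reproduced here.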
