arXiv Analytics

arXiv:2006.10293 [cs.LG]

GAT-GMM: Generative Adversarial Training for Gaussian Mixture Models

Farzan Farnia, William Wang, Subhro Das, Ali Jadbabaie

Published 2020-06-18Version 1

Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game between two machine players, a generator and a discriminator. While GANs achieve great success in learning the complex distributions of image, sound, and text data, they perform suboptimally on multi-modal distribution-learning benchmarks such as Gaussian mixture models (GMMs). In this paper, we propose Generative Adversarial Training for Gaussian Mixture Models (GAT-GMM), a minimax GAN framework for learning GMMs. Motivated by optimal transport theory, we design the zero-sum game in GAT-GMM using a random linear generator and a softmax-based quadratic discriminator architecture, which leads to a non-convex concave minimax optimization problem. We show that a Gradient Descent Ascent (GDA) method converges to an approximate stationary minimax point of the GAT-GMM optimization problem. In the benchmark case of a mixture of two symmetric, well-separated Gaussians, we further show that this stationary point recovers the true parameters of the underlying GMM. We support our theoretical findings with several numerical experiments, which demonstrate that GAT-GMM can perform as well as the expectation-maximization algorithm in learning mixtures of two Gaussians.
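The abstract's GAT-GMM objective (random linear generator, softmax-based quadratic discriminator) is not reproduced here, but the Gradient Descent Ascent update it analyzes follows a standard pattern. Below is a minimal, hypothetical sketch of simultaneous GDA on a toy saddle-point problem, f(x, y) = x² + 2xy − y², whose unique saddle point is (0, 0); GAT-GMM applies the same simultaneous min/max updates to its non-convex concave objective.

```python
import numpy as np

# Toy sketch of simultaneous Gradient Descent Ascent (GDA).
# Objective (an assumption for illustration, not the paper's):
#   f(x, y) = x^2 + 2*x*y - y^2, with unique saddle point (0, 0).
# The min player descends in x while the max player ascends in y.

def gda(x0, y0, lr=0.1, steps=500):
    x, y = x0, y0
    for _ in range(steps):
        gx = 2 * x + 2 * y            # df/dx, used by the min player
        gy = 2 * x - 2 * y            # df/dy, used by the max player
        # simultaneous updates: descent on x, ascent on y
        x, y = x - lr * gx, y + lr * gy
    return x, y

x, y = gda(1.0, -1.0)
print(x, y)  # both iterates contract toward the saddle point (0, 0)
```

With this step size the linear update map is a contraction, so the iterates spiral into the saddle point; in the non-convex setting of the paper, GDA is only guaranteed to reach an approximate stationary minimax point.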

Related articles: Most relevant | Search more
arXiv:2007.08133 [cs.LG] (Published 2020-07-16)
Overcomplete order-3 tensor decomposition, blind deconvolution and Gaussian mixture models
arXiv:2010.13388 [cs.LG] (Published 2020-10-26)
A Novel Classification Approach for Credit Scoring based on Gaussian Mixture Models
arXiv:2206.08598 [cs.LG] (Published 2022-06-17)
On the Influence of Enforcing Model Identifiability on Learning dynamics of Gaussian Mixture Models