{ "id": "2006.10293", "version": "v1", "published": "2020-06-18T06:11:28.000Z", "updated": "2020-06-18T06:11:28.000Z", "title": "GAT-GMM: Generative Adversarial Training for Gaussian Mixture Models", "authors": [ "Farzan Farnia", "William Wang", "Subhro Das", "Ali Jadbabaie" ], "categories": [ "cs.LG", "stat.ML" ], "abstract": "Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game between two machine players, a generator and a discriminator. While GANs achieve great success in learning the complex distribution of image, sound, and text data, they perform suboptimally in learning multi-modal distribution-learning benchmarks including Gaussian mixture models (GMMs). In this paper, we propose Generative Adversarial Training for Gaussian Mixture Models (GAT-GMM), a minimax GAN framework for learning GMMs. Motivated by optimal transport theory, we design the zero-sum game in GAT-GMM using a random linear generator and a softmax-based quadratic discriminator architecture, which leads to a non-convex concave minimax optimization problem. We show that a Gradient Descent Ascent (GDA) method converges to an approximate stationary minimax point of the GAT-GMM optimization problem. In the benchmark case of a mixture of two symmetric, well-separated Gaussians, we further show this stationary point recovers the true parameters of the underlying GMM. We numerically support our theoretical findings by performing several experiments, which demonstrate that GAT-GMM can perform as well as the expectation-maximization algorithm in learning mixtures of two Gaussians.", "revisions": [ { "version": "v1", "updated": "2020-06-18T06:11:28.000Z" } ], "analyses": { "keywords": [ "gaussian mixture models", "generative adversarial training", "non-convex concave minimax optimization problem", "zero-sum game", "gans achieve great success" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }