arXiv:1806.04465 [stat.ML]

Gaussian mixture models with Wasserstein distance

Benoit Gaujac, Ilya Feige, David Barber

Published 2018-06-12 (Version 1)

Generative models with both discrete and continuous latent variables are strongly motivated by the structure of many real-world data sets. They present, however, subtleties in training, often manifesting in the discrete latent variable being underleveraged. In this paper, we show that such models are more amenable to training when using the Optimal Transport framework of Wasserstein Autoencoders. We find that the discrete latent variable is fully leveraged by the trained model, without any modification to the objective function or significant fine-tuning. Our model generates samples comparable to those of other approaches while using relatively simple neural networks, since the discrete latent variable carries much of the descriptive burden. Furthermore, the discrete latent variable provides significant control over generation.
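For concreteness, the sketch below shows how such a model might be set up: a Wasserstein Autoencoder whose latent combines a categorical code (relaxed with Gumbel-softmax so it stays differentiable) with a continuous code, matched to a Gaussian-mixture prior through an MMD penalty. This is a minimal PyTorch illustration under assumed choices (layer sizes, K=10 components, an RBF kernel, arbitrary fixed prior means), not the paper's exact architecture or objective.

```python
# Minimal sketch of a WAE with a mixed discrete + continuous latent
# and an MMD penalty against a Gaussian-mixture prior. All specifics
# (sizes, K, Gumbel-softmax relaxation, kernel, prior means) are
# illustrative assumptions, not the paper's construction.
import torch
import torch.nn as nn
import torch.nn.functional as F

K, D, X_DIM = 10, 8, 784  # mixture components, continuous dim, input dim

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(X_DIM, 256), nn.ReLU())
        self.logits = nn.Linear(256, K)  # discrete latent (mixture component)
        self.z_head = nn.Linear(256, D)  # continuous latent

    def forward(self, x, tau=0.5):
        h = self.body(x)
        c = F.gumbel_softmax(self.logits(h), tau=tau)  # relaxed one-hot
        z = self.z_head(h)                             # deterministic code
        return c, z

decoder = nn.Sequential(nn.Linear(K + D, 256), nn.ReLU(),
                        nn.Linear(256, X_DIM), nn.Sigmoid())

def rbf_mmd(a, b, sigma=1.0):
    # Biased RBF-kernel MMD estimate between two samples of shape (n, dim).
    def gram(u, v):
        return torch.exp(-torch.cdist(u, v).pow(2) / (2 * sigma ** 2))
    return gram(a, a).mean() + gram(b, b).mean() - 2 * gram(a, b).mean()

PRIOR_MEANS = 3.0 * torch.randn(K, D)  # one fixed, arbitrary mean per component

def wae_loss(x, enc, lam=10.0):
    c, z = enc(x)
    x_hat = decoder(torch.cat([c, z], dim=1))
    recon = F.mse_loss(x_hat, x)
    # Sample the mixture prior: uniform categorical over components,
    # unit-variance Gaussian around that component's mean.
    idx = torch.randint(K, (x.size(0),))
    c_p = F.one_hot(idx, K).float()
    z_p = PRIOR_MEANS[idx] + torch.randn(x.size(0), D)
    penalty = rbf_mmd(torch.cat([c, z], 1), torch.cat([c_p, z_p], 1))
    return recon + lam * penalty

enc = Encoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(decoder.parameters()), 1e-3)
x = torch.rand(64, X_DIM)  # stand-in batch (e.g. flattened images)
loss = wae_loss(x, enc)
loss.backward()
opt.step()
```

The MMD term is one common instantiation of the WAE regularizer, applied here to the joint (discrete, continuous) code; the temperature tau controls how close the relaxed categorical code stays to a one-hot vector.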

Related articles:
arXiv:2006.10325 [stat.ML] (Published 2020-06-18)
When OT meets MoM: Robust estimation of Wasserstein Distance
arXiv:2310.12806 [stat.ML] (Published 2023-10-19)
DCSI -- An improved measure of cluster separability based on separation and connectedness
arXiv:2202.06930 [stat.ML] (Published 2022-02-14)
Tensor Moments of Gaussian Mixture Models: Theory and Applications