arXiv:2103.01678 [stat.ML]

Wasserstein GANs Work Because They Fail (to Approximate the Wasserstein Distance)

Jan Stanczuk, Christian Etmann, Lisa Maria Kreusser, Carola-Bibiane Schönlieb

Published 2021-03-02, Version 1

Wasserstein GANs are based on the idea of minimising the Wasserstein distance between a real and a generated distribution. We provide an in-depth mathematical analysis of the differences between the theoretical setup and the reality of training Wasserstein GANs. We gather both theoretical and empirical evidence that the WGAN loss is not a meaningful approximation of the Wasserstein distance. Moreover, we argue that the Wasserstein distance is not even a desirable loss function for deep generative models, and conclude that the success of Wasserstein GANs can in truth be attributed to a failure to approximate the Wasserstein distance.
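The gap the abstract describes can be illustrated with a hypothetical toy example (not from the paper): the Kantorovich-Rubinstein dual of the Wasserstein-1 distance maximises E_p[f] − E_q[f] over all 1-Lipschitz critics f, but any practical WGAN optimises over a restricted critic class. The sketch below, with illustrative helper names `exact_w1` and `linear_critic_w1`, compares the exact 1D W1 distance with the dual estimate obtained from an artificially restricted family of linear critics:

```python
# Toy 1D sketch: a restricted critic class can drive the dual
# estimate of W1 arbitrarily far from the true distance.

def exact_w1(xs, ys):
    """Exact 1D Wasserstein-1 distance between two equal-size samples:
    the mean absolute difference of the order statistics."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def linear_critic_w1(xs, ys):
    """Dual estimate restricted to linear critics f(x) = a*x, |a| <= 1;
    the maximiser is a = sign(mean(xs) - mean(ys))."""
    return abs(sum(xs) / len(xs) - sum(ys) / len(ys))

p = [-1.0, 1.0] * 50   # symmetric two-point distribution
q = [0.0] * 100        # point mass at zero

print(exact_w1(p, q))          # 1.0: the distributions are far apart in W1
print(linear_critic_w1(p, q))  # 0.0: the restricted critic sees no gap
```

A neural critic constrained by weight clipping or a gradient penalty is restricted in an analogous (if less extreme) way, which is one intuition for why the WGAN loss need not track the Wasserstein distance it nominally approximates.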
