arXiv Analytics

arXiv:1906.04848 [cs.LG]

A Closer Look at the Optimization Landscapes of Generative Adversarial Networks

Hugo Berard, Gauthier Gidel, Amjad Almahairi, Pascal Vincent, Simon Lacoste-Julien

Published 2019-06-11 (Version 1)

Generative adversarial networks (GANs) have been very successful in generative modeling; however, they remain relatively hard to optimize compared to standard deep neural networks. In this paper, we try to gain insight into the optimization of GANs by looking at the game vector field resulting from the concatenation of the gradients of both players. Based on this point of view, we propose visualization techniques that allow us to make the following empirical observations. First, GAN training suffers from rotational behavior around locally stable stationary points, which, as we show, corresponds to the presence of imaginary components in the eigenvalues of the Jacobian of the game. Second, GAN training seems to converge to a stable stationary point that is a saddle point for the generator loss, not a minimum, while still achieving excellent performance. This counter-intuitive yet persistent observation questions whether we actually need a Nash equilibrium to get good performance in GANs.
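As an illustration of the rotational behavior the abstract describes (a minimal sketch, not code from the paper), consider the classic bilinear game min_x max_y f(x, y) = x·y. The game vector field concatenates each player's gradient, its Jacobian has purely imaginary eigenvalues, and simultaneous gradient steps rotate around the stationary point (0, 0) rather than converging to it:

```python
import numpy as np

# Toy bilinear game: player x minimizes f(x, y) = x * y,
# player y maximizes it (i.e. minimizes -f).
def game_vector_field(x, y):
    # Concatenation of both players' gradients:
    # (df/dx, -df/dy) = (y, -x)
    return np.array([y, -x])

# Jacobian of the vector field (constant for this bilinear game).
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
eigvals = np.linalg.eigvals(J)
print(eigvals)  # purely imaginary eigenvalues (+i, -i) -> rotation

# Simultaneous gradient descent rotates around the stationary point
# (0, 0); with a finite step size it even spirals outward.
z = np.array([1.0, 0.0])
traj = [z.copy()]
lr = 0.1
for _ in range(100):
    z = z - lr * game_vector_field(*z)
    traj.append(z.copy())

print(np.linalg.norm(traj[0]), np.linalg.norm(traj[-1]))
```

Each update here applies the matrix [[1, -lr], [lr, 1]], a scaled rotation, so the distance to the equilibrium grows by sqrt(1 + lr**2) per step — the two-dimensional analogue of the rotational dynamics the paper visualizes in the eigenvalues of the game Jacobian.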
