

arXiv:1711.01744 [cs.LG]

KGAN: How to Break The Minimax Game in GAN

Trung Le, Tu Dinh Nguyen, Dinh Phung

Published 2017-11-06 (Version 1)

Generative Adversarial Networks (GANs) have been intuitively and attractively explained from the perspective of game theory, wherein the two players are a discriminator and a generator. In this game, the task of the discriminator is to discriminate between real and generated (i.e., fake) data, whilst the task of the generator is to generate fake data that maximally confuses the discriminator. In this paper, we propose a new viewpoint for GANs, which we term the minimizing general loss viewpoint. This viewpoint reveals a connection between the general loss of a classification problem with a convex loss function and an f-divergence between the true and fake data distributions. Mathematically, we propose a setting for the classification problem of the true and fake data, wherein we can prove that the general loss of this classification problem is exactly the negative f-divergence for a certain convex function f. This allows us to interpret the problem of learning a generator that minimizes the f-divergence between the true and fake data distributions as that of maximizing the general loss, which recovers the standard minimax problem of GAN when the logistic loss is used in the classification problem. This viewpoint strengthens GANs in two ways. First, it allows us to employ any convex loss function for the discriminator. Second, it suggests that, rather than limiting ourselves to neural-network-based discriminators, we can alternatively utilize other powerful families. Bearing this viewpoint, we then propose using the kernel-based family for discriminators. This family has two appealing features: i) a powerful capacity for classifying data with non-linear structure and ii) convexity in the feature space. Exploiting this convexity, we can further apply Fenchel duality to equivalently transform the max-min problem into a max-max dual problem.
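To make the stated connection concrete, here is a schematic sketch in standard GAN/f-GAN notation; the symbols (a classifier h, a convex loss \ell, the induced divergence D_f, and equal class priors) are our shorthand, not necessarily the paper's exact formulation. With the logistic loss, GAN solves the minimax problem

\min_{G} \max_{D} \; \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_{z}}[\log(1 - D(G(z)))].

The minimizing general loss viewpoint instead considers the general (expected) loss of the real-versus-fake classification problem under an arbitrary convex loss \ell,

R_{\ell}(h) = \frac{1}{2} \, \mathbb{E}_{x \sim p_{\text{data}}}[\ell(h(x))] + \frac{1}{2} \, \mathbb{E}_{x \sim p_{G}}[\ell(-h(x))],

whose infimum over classifiers h is a negative f-divergence for a convex f determined by \ell,

\inf_{h} R_{\ell}(h) = -\, D_{f}(p_{\text{data}} \,\|\, p_{G}).

Learning the generator to reduce D_f is then the max-min problem \max_{G} \inf_{h} R_{\ell}(h), and Fenchel duality applied to the convex inner problem yields the max-max dual form.

To illustrate why a kernel-based discriminator keeps the inner problem convex, the following is a minimal, self-contained sketch (ours, not the paper's implementation): an RBF kernel is approximated with random Fourier features, so the discriminator is linear, and the logistic loss therefore convex, in the feature space. All names (rff, W, b, and the toy data) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy "real" and "fake" 2-D samples standing in for p_data and p_G.
real = rng.normal(loc=2.0, scale=1.0, size=(500, 2))
fake = rng.normal(loc=-2.0, scale=1.0, size=(500, 2))

# Random Fourier features approximating an RBF kernel (Rahimi-Recht):
# phi(x) = sqrt(2/D) * cos(x W + b) gives k(x, y) ~ phi(x) . phi(y).
D, gamma = 256, 0.5
W = rng.normal(scale=np.sqrt(2 * gamma), size=(2, D))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def rff(x):
    """Map inputs into the (approximate) kernel feature space."""
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

X = np.vstack([rff(real), rff(fake)])              # feature-space inputs
y = np.concatenate([np.ones(500), -np.ones(500)])  # +1 real, -1 fake

# Logistic loss is one admissible convex loss; under the minimizing
# general loss viewpoint any convex surrogate could be used here.
w = np.zeros(D)
lr = 0.5
for _ in range(200):
    margins = np.clip(y * (X @ w), -30.0, 30.0)  # clip for stability
    # Gradient of mean_i log(1 + exp(-margin_i)) with respect to w.
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= lr * grad

acc = np.mean(np.sign(X @ w) == y)
print(f"kernel discriminator accuracy on toy data: {acc:.3f}")

Because the objective is convex in w, the discriminator's inner problem has no spurious local optima, which is precisely the property that lets Fenchel duality convert the max-min problem into its max-max dual.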

Related articles:
arXiv:1808.03591 [cs.LG] (Published 2018-08-10)
How Complex is your classification problem? A survey on measuring classification complexity
arXiv:cs/0311014 [cs.LG] (Published 2003-11-13)
Optimality of Universal Bayesian Sequence Prediction for General Loss and Alphabet
arXiv:cs/0506041 [cs.LG] (Published 2005-06-11, updated 2005-09-02)
Competitive on-line learning with a convex loss function