arXiv:1704.03817 [cs.LG]
MAGAN: Margin Adaptation for Generative Adversarial Networks
Ruohan Wang, Antoine Cully, Hyung Jin Chang, Yiannis Demiris
Published 2017-04-12 (Version 1)
We propose a novel training procedure for Generative Adversarial Networks (GANs) to improve stability and performance by using an adaptive hinge-loss objective function. We estimate the appropriate hinge-loss margin with the expected energy of the target distribution, and derive both a principled criterion for updating the margin and an approximate convergence measure. The resulting training procedure is simple yet robust on a diverse set of datasets. We evaluate the proposed training procedure on the task of unsupervised image generation and observe both qualitative and quantitative performance improvements.
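To make the idea concrete, the following is a minimal sketch (not the authors' reference implementation) of how an adaptive hinge-loss margin could be wired into an energy-based GAN objective. It assumes an autoencoder-style discriminator whose energy is the per-sample reconstruction error, and a hypothetical maybe_update_margin rule that lowers the margin toward the current expected real-data energy; the paper's actual update criterion and convergence measure are more precise than this stand-in.

```python
import torch
import torch.nn.functional as F

# Hedged sketch: energy-based GAN losses with an adaptive hinge margin.
# Assumptions (not taken verbatim from the paper): the discriminator is an
# autoencoder, its energy is the reconstruction error, and the margin is
# tracked as an estimate of the expected energy of real data.

def energy(discriminator, x):
    """Energy of a sample: per-example autoencoder reconstruction error."""
    recon = discriminator(x)
    return F.mse_loss(recon, x, reduction="none").flatten(1).mean(dim=1)

def discriminator_loss(discriminator, x_real, x_fake, margin):
    """Hinge-style objective: push real energy down, fake energy up to the margin."""
    e_real = energy(discriminator, x_real)
    e_fake = energy(discriminator, x_fake.detach())
    return e_real.mean() + F.relu(margin - e_fake).mean()

def generator_loss(discriminator, x_fake):
    """Generator minimizes the energy assigned to its samples."""
    return energy(discriminator, x_fake).mean()

def maybe_update_margin(margin, expected_real_energy, prev_expected_real_energy):
    """Illustrative update rule (a stand-in for the paper's principled criterion):
    shrink the margin to the current expected real-data energy when that
    expectation has decreased since the last check and is below the margin."""
    if expected_real_energy < prev_expected_real_energy and expected_real_energy < margin:
        return expected_real_energy
    return margin
```

In a training loop, expected_real_energy would be an epoch-level average of energy(discriminator, x_real), and maybe_update_margin would be called between epochs so the margin tracks the shrinking energy of the real distribution rather than staying at a fixed hand-tuned value.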
Related articles:
arXiv:1906.04848 [cs.LG] (Published 2019-06-11)
A Closer Look at the Optimization Landscapes of Generative Adversarial Networks
arXiv:2007.11133 [cs.LG] (Published 2020-07-21)
Unsupervised Learning of Solutions to Differential Equations with Generative Adversarial Networks
arXiv:2006.05891 [cs.LG] (Published 2020-06-10)
On Noise Injection in Generative Adversarial Networks