arXiv Analytics

arXiv:1806.07755 [cs.LG]

An empirical study on evaluation metrics of generative adversarial networks

Qiantong Xu, Gao Huang, Yang Yuan, Chuan Guo, Yu Sun, Felix Wu, Kilian Weinberger

Published 2018-06-19 (Version 1)

Evaluating generative adversarial networks (GANs) is inherently challenging. In this paper, we revisit several representative sample-based evaluation metrics for GANs, and address the problem of how to evaluate the evaluation metrics. We start with a few necessary conditions for metrics to produce meaningful scores, such as distinguishing real from generated samples, identifying mode dropping and mode collapsing, and detecting overfitting. With a series of carefully designed experiments, we comprehensively investigate existing sample-based metrics and identify their strengths and limitations in practical settings. Based on these results, we observe that kernel Maximum Mean Discrepancy (MMD) and the 1-Nearest-Neighbor (1-NN) two-sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space. Our experiments also unveil interesting properties about the behavior of several popular GAN models, such as whether they are memorizing training samples, and how far they are from learning the target distribution.
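
The two metrics the abstract singles out are simple enough to sketch. Below is a minimal, illustrative implementation (not the authors' released code) of Gaussian-kernel MMD and the leave-one-out 1-NN two-sample test. It assumes `real` and `fake` are already feature vectors extracted in a suitable feature space, which is the paper's key caveat; the Gaussian kernel, the bandwidth, and the NumPy/scikit-learn usage are illustrative choices, not taken from the paper.

    # Sketch of two sample-based GAN evaluation metrics: kernel MMD and the
    # 1-NN two-sample test. `real` and `fake` are assumed to be feature
    # vectors of shape (n, d), e.g. activations of a pretrained convnet.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def gaussian_kernel(x, y, sigma=1.0):
        """Pairwise Gaussian kernel matrix between rows of x and rows of y."""
        sq_dists = (
            np.sum(x**2, axis=1)[:, None]
            + np.sum(y**2, axis=1)[None, :]
            - 2.0 * x @ y.T
        )
        return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * sigma**2))

    def mmd_squared(real, fake, sigma=1.0):
        """Biased estimate of squared MMD between the two samples."""
        k_rr = gaussian_kernel(real, real, sigma)
        k_ff = gaussian_kernel(fake, fake, sigma)
        k_rf = gaussian_kernel(real, fake, sigma)
        return k_rr.mean() + k_ff.mean() - 2.0 * k_rf.mean()

    def one_nn_accuracy(real, fake):
        """Leave-one-out 1-NN accuracy on the pooled sample.

        Around 0.5 means the classifier cannot tell real from generated
        (the ideal outcome); values near 1.0 indicate easily separable
        samples, and values well below 0.5 can signal memorization of
        the training set.
        """
        x = np.concatenate([real, fake], axis=0)
        y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
        correct = 0
        for i in range(len(x)):
            mask = np.arange(len(x)) != i       # leave sample i out
            clf = KNeighborsClassifier(n_neighbors=1)
            clf.fit(x[mask], y[mask])
            correct += clf.predict(x[i:i + 1])[0] == y[i]
        return correct / len(x)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        real = rng.normal(0.0, 1.0, size=(200, 64))  # stand-in features
        fake = rng.normal(0.5, 1.0, size=(200, 64))  # shifted "generated" set
        print("MMD^2:", mmd_squared(real, fake))
        print("1-NN accuracy:", one_nn_accuracy(real, fake))

On the toy data above, the shifted "generated" sample yields a positive MMD^2 and a 1-NN accuracy above 0.5, consistent with the paper's point that both statistics approach their ideal values (0 and 0.5, respectively) only when the two distributions match in the chosen feature space.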

Comments: arXiv admin note: text overlap with arXiv:1802.03446 by other authors
Categories: cs.LG, cs.CV, stat.ML
Related articles:
arXiv:2007.08428 [cs.LG] (Published 2020-07-16)
An Empirical Study on the Robustness of NAS based Architectures
arXiv:1911.04120 [cs.LG] (Published 2019-11-11)
An empirical study of the relation between network architecture and complexity
arXiv:2010.13365 [cs.LG] (Published 2020-10-26)
Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy