arXiv Analytics

arXiv:2106.07898 [stat.ML]

Divergence Frontiers for Generative Models: Sample Complexity, Quantization Level, and Frontier Integral

Lang Liu, Krishna Pillutla, Sean Welleck, Sewoong Oh, Yejin Choi, Zaid Harchaoui

Published 2021-06-15 (Version 1)

The spectacular success of deep generative models calls for quantitative tools to measure their statistical performance. Divergence frontiers have recently been proposed as an evaluation framework for generative models, due to their ability to measure the quality-diversity trade-off inherent to deep generative modeling. However, the statistical behavior of divergence frontiers estimated from data has remained unknown to this day. In this paper, we establish non-asymptotic bounds on the sample complexity of the plug-in estimator of divergence frontiers. Along the way, we introduce a novel integral summary of divergence frontiers. We derive the corresponding non-asymptotic bounds and discuss the choice of the quantization level by balancing the two types of approximation errors arising from its computation. We also augment the divergence frontier framework by investigating the statistical performance of smoothed distribution estimators such as the Good-Turing estimator. We illustrate the theoretical results with numerical examples from natural language processing and computer vision.
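To make the pipeline described in the abstract concrete, the sketch below illustrates one way such an evaluation could be set up: quantize samples from the "real" and "model" distributions to a finite alphabet of size k, form plug-in or smoothed distribution estimates, trace a frontier of KL divergences to mixtures of the two distributions, and reduce it to a scalar summary. This is a minimal illustration, not the authors' code; the function names, the add-constant stand-in for the Good-Turing estimator, and the scalar summary used here are assumptions, and the paper's precise definitions of the divergence frontier and the Frontier Integral may differ.

```python
# Illustrative sketch of plug-in divergence-frontier estimation on a finite
# alphabet. Not the authors' implementation; definitions are simplified.
import numpy as np

def empirical_dist(samples, k):
    """Plug-in (empirical) estimate of a distribution over {0, ..., k-1}."""
    counts = np.bincount(samples, minlength=k).astype(float)
    return counts / counts.sum()

def smoothed_dist(samples, k, c=0.5):
    """Add-constant smoothing; a placeholder for the smoothed estimators
    (e.g. Good-Turing) analyzed in the paper."""
    counts = np.bincount(samples, minlength=k).astype(float)
    return (counts + c) / (counts.sum() + c * k)

def kl(p, q, eps=1e-12):
    """KL divergence KL(p || q), clipped to avoid log(0)."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def divergence_frontier(p, q, num_lambdas=200):
    """Trace (KL(r || p), KL(r || q)) for mixtures r = lam*p + (1-lam)*q."""
    lams = np.linspace(0.0, 1.0, num_lambdas + 2)[1:-1]  # skip endpoints
    pts = [(kl(lam * p + (1 - lam) * q, p),
            kl(lam * p + (1 - lam) * q, q)) for lam in lams]
    return np.array(pts)

def frontier_summary(p, q, num_lambdas=200):
    """Scalar summary: average of KL(r||p) + KL(r||q) along the mixture path.
    An illustrative surrogate for the paper's Frontier Integral."""
    pts = divergence_frontier(p, q, num_lambdas)
    return float(pts.sum(axis=1).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k = 20                                   # quantization level (bins)
    p_true = rng.dirichlet(np.ones(k))       # "real data" distribution
    q_true = rng.dirichlet(np.ones(k))       # "model" distribution
    xs = rng.choice(k, size=1000, p=p_true)  # samples from the data
    ys = rng.choice(k, size=1000, p=q_true)  # samples from the model
    p_hat = empirical_dist(xs, k)
    q_hat = smoothed_dist(ys, k)
    print("estimated frontier summary: ", frontier_summary(p_hat, q_hat))
    print("population frontier summary:", frontier_summary(p_true, q_true))
```

Comparing the estimated and population summaries at different sample sizes and quantization levels k is one way to see, numerically, the two competing error sources the abstract refers to: finer quantization reduces the discretization error but increases the statistical error of the plug-in estimate.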

Related articles:
arXiv:1011.5395 [stat.ML] (Published 2010-11-24)
The Sample Complexity of Dictionary Learning
arXiv:2409.01243 [stat.ML] (Published 2024-09-02)
Sample Complexity of the Sign-Perturbed Sums Method
arXiv:2106.07148 [stat.ML] (Published 2021-06-14)
On the Sample Complexity of Learning with Geometric Stability