arXiv Analytics


arXiv:1608.08814 [stat.CO]

Importance Sampling and Necessary Sample Size: an Information Theory Approach

Daniel Sanz-Alonso

Published 2016-08-31 (Version 1)

Importance sampling approximates expectations with respect to a target measure by using samples from a proposal measure. The performance of the method over large classes of test functions depends heavily on how close the two measures are. We derive a general bound, relating the $f$-divergence between the target and the proposal to the sample size, that must hold for importance sampling to be successful. The bound is deduced from a new and simple information-theoretic framework for the study of importance sampling. As examples of the general theory, we give necessary conditions on the sample size in terms of the Kullback-Leibler and $\chi^2$ divergences, and the total variation and Hellinger distances. Our approach is non-asymptotic, and its generality makes it possible to compare the relative merits of these metrics. Unsurprisingly, the non-symmetric divergences give sharper bounds than the total variation or Hellinger distances. Our results extend existing necessary conditions, and complement sufficient ones, on the sample size required for importance sampling.
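To make the setting concrete, the sketch below illustrates self-normalized importance sampling with a Gaussian target and proposal, together with the standard effective-sample-size diagnostic, which is closely tied to the $\chi^2$ divergence between target and proposal. This is a minimal illustration under assumed toy densities, not code or notation from the paper.

```python
# Minimal sketch (illustrative, not from the paper): self-normalized
# importance sampling for E_pi[f(X)] with samples from a proposal q.
# Target pi = N(1, 1), proposal q = N(0, 2^2), f(x) = x^2 are all
# assumptions chosen for the example.
import numpy as np

rng = np.random.default_rng(0)

def log_pi(x):
    # unnormalized log-density of the target N(1, 1)
    return -0.5 * (x - 1.0) ** 2

def log_q(x):
    # unnormalized log-density of the proposal N(0, 4)
    return -0.125 * x ** 2

n = 10_000
x = rng.normal(0.0, 2.0, size=n)       # draws from the proposal
logw = log_pi(x) - log_q(x)            # unnormalized log-weights
w = np.exp(logw - logw.max())          # stabilize before normalizing
w /= w.sum()

estimate = np.sum(w * x ** 2)          # self-normalized estimate of E_pi[X^2] (true value: 2)

# Rough diagnostic: the effective sample size, often approximated as
# n / (1 + chi^2(pi || q)); small ESS signals that the proposal is far
# from the target and a much larger sample size is needed.
ess = 1.0 / np.sum(w ** 2)
print(f"estimate = {estimate:.3f}, ESS = {ess:.1f} out of {n}")
```

The weight-normalization step makes the estimator usable when the densities are known only up to constants, and the ESS diagnostic is one practical reflection of the divergence-based sample-size requirements studied in the paper.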

Related articles:
arXiv:1507.02646 [stat.CO] (Published 2015-07-09)
Very Good Importance Sampling
arXiv:2102.05407 [stat.CO] (Published 2021-02-10)
Advances in Importance Sampling
arXiv:1512.04743 [stat.CO] (Published 2015-12-15)
Model comparison with missing data using MCMC and importance sampling