arXiv Analytics

arXiv:1711.08911 [stat.ML]

Computing the quality of the Laplace approximation

Guillaume P. Dehaene

Published 2017-11-24 (Version 1)

Bayesian inference requires approximation methods to become computable, but for most of them, it is impossible to quantify how close the approximation is to the true posterior. In this work, we present a theorem upper-bounding the Kullback-Leibler (KL) divergence between a log-concave target density $f\left(\boldsymbol{\theta}\right)$ and its Laplace approximation $g\left(\boldsymbol{\theta}\right)$. The bound we present is computable: on the classical logistic regression model, we find our bound to be almost exact as long as the dimensionality of the parameter space is high. The approach we followed in this work can be extended to other Gaussian approximations, as we will do in an extended version of this work, to be submitted to the Annals of Statistics. The bound will then become a critical tool for characterizing whether, for a given problem, a given Gaussian approximation is suitable, or whether a more precise alternative method should be used instead.
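
The abstract does not reproduce the bound itself, so the sketch below only constructs the two objects it compares: a log-concave logistic-regression posterior $f$ and its Laplace approximation $g$, i.e. a Gaussian centered at the posterior mode with covariance equal to the inverse Hessian of $-\log f$ at that mode. Everything here (the function names, the Gaussian prior with variance `prior_var`, the synthetic data) is an illustrative assumption, not the author's code.

```python
# A minimal sketch (not the paper's method) of a log-concave logistic-regression
# posterior f and its Laplace approximation g. All names and data are illustrative.
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta, X, y, prior_var=10.0):
    """Negative log-posterior of logistic regression with a Gaussian prior
    (log-concave, so the Laplace approximation is well defined)."""
    logits = X @ theta
    log_lik = y @ logits - np.logaddexp(0.0, logits).sum()  # stable log(1+e^z)
    log_prior = -0.5 * theta @ theta / prior_var
    return -(log_lik + log_prior)

def laplace_approximation(X, y, prior_var=10.0):
    """Return the mean and covariance of g: the mode of f and the inverse
    Hessian of -log f at that mode."""
    n, d = X.shape
    res = minimize(neg_log_posterior, np.zeros(d), args=(X, y, prior_var),
                   method="BFGS")
    theta_map = res.x
    p = 1.0 / (1.0 + np.exp(-(X @ theta_map)))   # sigmoid at the mode
    W = p * (1.0 - p)                            # Bernoulli variances
    H = X.T @ (X * W[:, None]) + np.eye(d) / prior_var  # Hessian of -log f
    return theta_map, np.linalg.inv(H)

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=100) > 0).astype(float)
mean, cov = laplace_approximation(X, y)
print("MAP estimate:", mean)
```

With `mean` and `cov` in hand, the KL divergence between $f$ and $g$ that the paper bounds could be estimated numerically, e.g. by quadrature in low dimension; the paper's contribution is an upper bound on it that is computable directly.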

Comments: Advances in Approximate Bayesian Inference NIPS 2017 Workshop
Categories: stat.ML, math.ST, stat.TH
Related articles:
arXiv:2203.07755 [stat.ML] (Published 2022-03-15)
Generative models and Bayesian inversion using Laplace approximation
arXiv:1901.04791 [stat.ML] (Published 2019-01-15)
Mixed Variational Inference
arXiv:2502.06719 [stat.ML] (Published 2025-02-10)
Gaussian Approximation and Multiplier Bootstrap for Stochastic Gradient Descent