arXiv Analytics


arXiv:2103.00222 [stat.ML]

Variational Laplace for Bayesian neural networks

Ali Unlu, Laurence Aitchison

Published 2021-02-27 (Version 1)

We develop variational Laplace for Bayesian neural networks (BNNs), which exploits a local approximation of the curvature of the likelihood to estimate the ELBO without stochastic sampling of the neural-network weights. Variational Laplace performs better on image classification tasks than MAP inference, and far better than standard variational inference (VI) with stochastic sampling, despite using the same mean-field Gaussian approximate posterior. The variational Laplace objective is simple to evaluate: it is, in essence, the log-likelihood plus weight decay plus a squared-gradient regularizer. Finally, we emphasise the care needed when benchmarking standard VI, as there is a risk of stopping training before the variance parameters have converged. We show that this premature stopping can be avoided by increasing the learning rate for the variance parameters.
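The objective described in the abstract (log-likelihood, plus weight decay, plus a squared-gradient regularizer) can be sketched for a toy one-parameter regression model. This is a minimal illustration only: the model, data, and the weighting of each term are assumptions for demonstration, not the paper's derivation.

```python
import numpy as np

def nll(w, x, y):
    # Gaussian negative log-likelihood (up to constants) for y ≈ w * x
    r = w * x - y
    return 0.5 * np.sum(r ** 2)

def nll_grad(w, x, y):
    # Analytic gradient of the NLL with respect to the scalar weight w
    return np.sum((w * x - y) * x)

def variational_laplace_objective(m, s2, x, y, weight_decay=1e-2):
    # Illustrative objective in the spirit of the abstract:
    #   NLL at the posterior mean m
    # + weight-decay term (from the KL on the mean)
    # + squared-gradient regularizer, weighted by the posterior variance s2.
    # The exact coefficients here are illustrative assumptions.
    g = nll_grad(m, x, y)
    return nll(m, x, y) + 0.5 * weight_decay * m ** 2 + 0.5 * s2 * g ** 2

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.1, 5.9])
obj = variational_laplace_objective(m=1.9, s2=0.1, x=x, y=y)
```

Note that, unlike sampling-based VI, evaluating this objective is deterministic: it needs only the loss and its gradient at the posterior mean, so no Monte Carlo weight samples are drawn.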

Related articles: Most relevant | Search more
arXiv:1903.07594 [stat.ML] (Published 2019-03-18)
Combining Model and Parameter Uncertainty in Bayesian Neural Networks
arXiv:2008.08400 [stat.ML] (Published 2020-08-19)
Improving predictions of Bayesian neural networks via local linearization
arXiv:1912.00874 [stat.ML] (Published 2019-12-02)
Implicit Priors for Knowledge Sharing in Bayesian Neural Networks