arXiv Analytics

arXiv:2008.08400 [stat.ML]

Improving predictions of Bayesian neural networks via local linearization

Alexander Immer, Maciej Korzepa, Matthias Bauer

Published 2020-08-19 (Version 1)

In this paper we argue that, in Bayesian deep learning, the frequently used generalized Gauss-Newton (GGN) approximation should be understood as a modification of the underlying probabilistic model and should be considered separately from any further approximate inference technique. Applying the GGN approximation turns a Bayesian neural network (BNN) into a locally linearized generalized linear model or, equivalently, a Gaussian process. Because we then use this linearized model for inference, we should also predict using its modified likelihood rather than the original BNN likelihood. This formulation extends previous results to general likelihoods and alleviates the underfitting behaviour observed e.g. by Ritter et al. (2018). We demonstrate our approach on several UCI classification datasets as well as CIFAR10.
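The core idea — predict with the locally linearized model rather than the original network — can be illustrated with a minimal sketch. The toy two-parameter "network" below, the MAP estimate `theta_map`, and the posterior covariance `Sigma` are all hypothetical stand-ins (in practice `Sigma` would come from a Laplace/GGN fit); the sketch only shows the mechanics of sampling from the linearized predictive f_lin(x; θ) = f(x; θ*) + J(x)ᵀ(θ − θ*).

```python
import numpy as np

# Toy 1D "network": f(x; theta) = theta[0] * tanh(theta[1] * x).
# A hypothetical stand-in for a trained BNN; not the paper's architecture.
def f(x, theta):
    return theta[0] * np.tanh(theta[1] * x)

def jacobian(x, theta, eps=1e-6):
    """Finite-difference Jacobian of f w.r.t. theta at input x."""
    J = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        J[i] = (f(x, theta + d) - f(x, theta - d)) / (2 * eps)
    return J

theta_map = np.array([1.5, 0.8])              # assumed MAP estimate
Sigma = np.array([[0.05, 0.0], [0.0, 0.02]])  # assumed posterior covariance

def linearized_predictive(x, n_samples=1000, seed=0):
    """Sample from the linearized predictive:
    f_lin(x; theta) = f(x; theta_map) + J(x)^T (theta - theta_map),
    with theta drawn from the Gaussian posterior N(theta_map, Sigma)."""
    rng = np.random.default_rng(seed)
    J = jacobian(x, theta_map)
    thetas = rng.multivariate_normal(theta_map, Sigma, size=n_samples)
    return f(x, theta_map) + (thetas - theta_map) @ J

samples = linearized_predictive(0.5)
print(samples.mean(), samples.std())
```

Because f_lin is affine in θ, the predictive for a Gaussian likelihood is available in closed form: mean f(x; θ*) and variance J(x)ᵀ Σ J(x); sampling is only needed once a non-Gaussian likelihood (e.g. softmax for classification) is pushed through the linearized outputs.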

Comments: ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning
Categories: stat.ML, cs.LG
Related articles:
arXiv:2501.11773 [stat.ML] (Published 2025-01-20)
Can Bayesian Neural Networks Make Confident Predictions?
arXiv:1907.02610 [stat.ML] (Published 2019-07-04)
Adversarial Robustness through Local Linearization
Chongli Qin et al.
arXiv:2103.00222 [stat.ML] (Published 2021-02-27)
Variational Laplace for Bayesian neural networks