arXiv Analytics

arXiv:1810.03958 [cs.LG]

Fixing Variational Bayes: Deterministic Variational Inference for Bayesian Neural Networks

Anqi Wu, Sebastian Nowozin, Edward Meeds, Richard E. Turner, José Miguel Hernández-Lobato, Alexander L. Gaunt

Published 2018-10-09, Version 1

Bayesian neural networks (BNNs) hold great promise as a flexible and principled solution for dealing with uncertainty when learning from finite data. Among approaches to probabilistic inference in deep neural networks, variational Bayes (VB) is theoretically grounded, generally applicable, and computationally efficient. Given these widely recognized advantages, why has variational Bayes seen so little practical use for BNNs in real applications? We argue that variational inference in neural networks is fragile: successful implementations require careful initialization and tuning of prior variances, as well as control of the variance of Monte Carlo gradient estimates. We fix VB and turn it into a robust inference tool for Bayesian neural networks. We achieve this with two innovations: first, we introduce a novel deterministic method to approximate moments in neural networks, eliminating gradient variance; second, we introduce a hierarchical prior for parameters and a novel empirical Bayes procedure for automatically selecting prior variances. Combining these two innovations, the resulting method is highly efficient and robust. On heteroscedastic regression tasks we demonstrate strong predictive performance over alternative approaches.
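The deterministic moment approximation can be illustrated with a minimal sketch: instead of sampling weights and activations, propagate the mean and variance of each activation through a mean-field linear layer and a ReLU in closed form. This is a generic moment-matching sketch under independence and Gaussian assumptions, not the paper's exact derivation; all function names and shapes below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def linear_moments(m_x, v_x, M_w, V_w, m_b, v_b):
    """Mean/variance of y = W x + b under mean-field Gaussian weights.

    Assumes x and W are independent with elementwise Gaussians
    (hypothetical shapes: M_w, V_w are [out, in]; the rest are vectors).
    """
    m_y = M_w @ m_x + m_b
    # Var[w x] = Vw (mx^2 + vx) + Mw^2 vx for independent Gaussians.
    v_y = V_w @ (m_x**2 + v_x) + (M_w**2) @ v_x + v_b
    return m_y, v_y

def relu_moments(m, v):
    """Closed-form mean/variance of ReLU(z) for z ~ N(m, v), elementwise."""
    s = np.sqrt(v)
    a = m / s
    mean = m * norm.cdf(a) + s * norm.pdf(a)
    second = (m**2 + v) * norm.cdf(a) + m * s * norm.pdf(a)
    return mean, np.maximum(second - mean**2, 1e-12)
```

Because both steps are deterministic functions of the variational parameters, gradients of the resulting objective have no Monte Carlo noise, which is the property the abstract highlights.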

Related articles:
arXiv:2102.11062 [cs.LG] (Published 2021-02-22)
On the Effects of Quantisation on Model Uncertainty in Bayesian Neural Networks
arXiv:2011.05074 [cs.LG] (Published 2020-11-10)
Efficient and Transferable Adversarial Examples from Bayesian Neural Networks
arXiv:2306.10742 [cs.LG] (Published 2023-06-19)
BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic Programming