arXiv Analytics

arXiv:2008.06729 [cs.LG]

Reliable Uncertainties for Bayesian Neural Networks using Alpha-divergences

Hector J. Hortua, Luigi Malago, Riccardo Volpi

Published 2020-08-15 (Version 1)

Bayesian Neural Networks (BNNs) are often miscalibrated after training, usually tending towards overconfidence. Devising effective calibration methods with low computational overhead is thus of central interest. In this paper we present calibration methods for BNNs based on alpha-divergences from Information Geometry. We compare the use of alpha-divergences in training and in calibration, and show that their use in calibration yields better-calibrated uncertainty estimates for specific choices of alpha and is more efficient, especially for complex network architectures. We empirically demonstrate the advantages of alpha calibration in regression problems involving parameter estimation and inferred correlations between output uncertainties.
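The abstract does not state which parameterization of the alpha-divergence the paper adopts; for reference, a standard form from Information Geometry (Amari's alpha-divergence) between densities p and q is

D_\alpha(p \,\|\, q) = \frac{4}{1-\alpha^{2}} \left( 1 - \int p(x)^{\frac{1-\alpha}{2}}\, q(x)^{\frac{1+\alpha}{2}}\, dx \right), \qquad \alpha \neq \pm 1,

which, under this convention, recovers the two Kullback-Leibler divergences KL(p‖q) and KL(q‖p) in the limits alpha → ∓1, so intermediate values of alpha interpolate between mass-covering and mode-seeking behaviour.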

Comments: Accepted at the ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning
Categories: cs.LG, stat.ML
Related articles:
arXiv:2002.04359 [cs.LG] (Published 2020-02-11)
Robustness of Bayesian Neural Networks to Gradient-Based Attacks
arXiv:2011.05074 [cs.LG] (Published 2020-11-10)
Efficient and Transferable Adversarial Examples from Bayesian Neural Networks
arXiv:2306.10742 [cs.LG] (Published 2023-06-19)
BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic Programming