arXiv Analytics

arXiv:1810.05546 [stat.ML]

Uncertainty in Neural Networks: Bayesian Ensembling

Tim Pearce, Mohamed Zaki, Alexandra Brintrup, Andy Neely

Published 2018-10-12 (Version 1)

Understanding the uncertainty of a neural network's (NN) predictions is essential for many applications. The Bayesian framework provides a principled approach to this; however, applying it to NNs is challenging because of their large numbers of parameters and the scale of modern datasets. Ensembling NNs provides a practical and scalable method for uncertainty quantification, but its drawback is that its justification is heuristic rather than Bayesian. In this work we propose one modification to the usual ensembling process that does result in Bayesian behaviour: regularising parameters about values drawn from a prior distribution. Hence, we present an easily implementable, scalable technique for performing approximate Bayesian inference in NNs.
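
To make the proposed modification concrete, the sketch below shows one way the anchored regulariser described in the abstract might be implemented in PyTorch: each ensemble member is penalised towards its own anchor weights drawn from a Gaussian prior, rather than towards zero as in standard weight decay. This is a minimal illustration, not the paper's exact recipe; the architecture, PRIOR_STD, DATA_NOISE, and the regularisation scaling `lam` are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative settings (assumptions, not taken from the paper's experiments).
PRIOR_STD = 1.0    # std of an assumed Gaussian prior over each weight
DATA_NOISE = 0.1   # assumed observation-noise std
N_ENSEMBLE = 5

def make_net():
    return nn.Sequential(nn.Linear(1, 50), nn.ReLU(), nn.Linear(50, 1))

def train_anchored(net, x, y, epochs=1000, lr=1e-2):
    # Draw anchor values from the prior; regularise parameters about them
    # instead of about zero (the modification described in the abstract).
    anchors = [PRIOR_STD * torch.randn_like(p) for p in net.parameters()]
    # One plausible scaling: noise variance over prior variance, per datum.
    lam = DATA_NOISE**2 / (PRIOR_STD**2 * len(x))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        mse = ((net(x) - y) ** 2).mean()
        reg = sum(((p - a) ** 2).sum() for p, a in zip(net.parameters(), anchors))
        (mse + lam * reg).backward()
        opt.step()
    return net

# Usage on toy 1-D data: each member gets its own anchor, and the spread
# of the ensemble's predictions serves as the uncertainty estimate.
x = torch.linspace(-1, 1, 40).unsqueeze(1)
y = torch.sin(3 * x) + DATA_NOISE * torch.randn_like(x)
ensemble = [train_anchored(make_net(), x, y) for _ in range(N_ENSEMBLE)]

x_test = torch.linspace(-2, 2, 100).unsqueeze(1)
with torch.no_grad():
    preds = torch.stack([net(x_test) for net in ensemble])
mean, std = preds.mean(0), preds.std(0)  # predictive mean and spread
```

Note that, apart from drawing and storing the anchors, the training loop is an ordinary regularised fit, which is what makes the technique easy to implement and scale.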

Related articles:
arXiv:2007.12826 [stat.ML] (Published 2020-07-25)
The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training
arXiv:2205.08609 [stat.ML] (Published 2022-05-17)
Bagged Polynomial Regression and Neural Networks
arXiv:1907.00825 [stat.ML] (Published 2019-07-01)
Time-to-Event Prediction with Neural Networks and Cox Regression