arXiv:1905.06076 [stat.ML]

Expressive Priors in Bayesian Neural Networks: Kernel Combinations and Periodic Functions

Tim Pearce, Mohamed Zaki, Alexandra Brintrup, Andy Neely

Published 2019-05-15 (Version 1)

A simple, flexible way to create expressive priors in Gaussian process (GP) models is to build new kernels from combinations of basic kernels; for example, summing a periodic kernel and a linear kernel captures seasonal variation with a long-term trend. Despite a well-studied link between GPs and Bayesian neural networks (BNNs), the BNN analogue of this has not yet been explored. This paper derives BNN architectures mirroring such kernel combinations. Furthermore, it shows how BNNs can produce periodic kernels, which are often useful in this context. These ideas provide a principled approach to designing BNNs that incorporate prior knowledge about a function. We showcase the practical value of these ideas with illustrative experiments in supervised and reinforcement learning settings.
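As a minimal sketch of the kernel-combination idea the abstract describes, the following code sums a periodic kernel and a linear kernel for 1-D inputs. The hyperparameter values (period, lengthscale, variance) are illustrative choices, not values taken from the paper, and this is the standard GP construction rather than the paper's BNN analogue.

```python
import numpy as np

def periodic_kernel(x1, x2, period=1.0, lengthscale=1.0, variance=1.0):
    """Exponentiated-sine-squared (periodic) kernel for 1-D inputs."""
    d = np.abs(x1[:, None] - x2[None, :])
    return variance * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

def linear_kernel(x1, x2, variance=1.0):
    """Linear (dot-product) kernel for 1-D inputs."""
    return variance * x1[:, None] * x2[None, :]

def combined_kernel(x1, x2):
    # A sum of valid kernels is itself a valid kernel: this prior
    # encodes periodicity on top of a long-term linear trend.
    return periodic_kernel(x1, x2) + linear_kernel(x1, x2)

x = np.linspace(0.0, 4.0, 50)
K = combined_kernel(x, x)
# K is the symmetric, positive semi-definite Gram matrix of the combined prior.
```

Samples drawn from a zero-mean Gaussian with covariance `K` would exhibit seasonal wiggles superimposed on a trend, which is the behaviour the summed kernel is designed to express.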

Comments: Accepted to Uncertainty in Artificial Intelligence (UAI) 2019
Categories: stat.ML, cs.AI, cs.LG