arXiv:1912.00874 [stat.ML]

Implicit Priors for Knowledge Sharing in Bayesian Neural Networks

Jack K Fitzsimons, Sebastian M Schmon, Stephen J Roberts

Published 2019-12-02, Version 1

Bayesian interpretations of neural networks have a long history, dating back to early work in the 1990s, and have recently regained attention because of desirable properties such as uncertainty estimation, model robustness and regularisation. Here we discuss the application of Bayesian models to knowledge sharing between neural networks. Knowledge sharing comes in different facets, such as transfer learning, model distillation and shared embeddings. All of these tasks have in common that learned "features" ought to be shared across different networks. Theoretically rooted in the concepts of Bayesian neural networks, this work has widespread application to general deep learning.
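In Bayesian terms, the shared "features" enter through the prior: if a source network has been trained on data D_A, its (approximate) posterior p(theta | D_A) can serve as the prior when a second network learns from D_B, since p(theta | D_A, D_B) is proportional to p(D_B | theta) p(theta | D_A). The sketch below is not the paper's implementation; it illustrates this prior-sharing idea under assumed simplifications (a linear model in place of a network, a diagonal Gaussian posterior approximation with a hand-picked variance, and illustrative hyperparameters), using MAP training with the source posterior as the target's prior.

# Minimal sketch of posterior-as-prior knowledge sharing (illustrative, not
# the paper's method). Model, variance and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def nll_grad(w, X, y):
    """Gradient of the Gaussian negative log-likelihood of a linear model."""
    return X.T @ (X @ w - y) / len(y)

def train_map(X, y, prior_mean, prior_var, lr=0.1, steps=2000):
    """MAP estimate under a diagonal Gaussian prior N(prior_mean, prior_var)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # Gradient of the negative log-posterior: likelihood term + prior term.
        g = nll_grad(w, X, y) + (w - prior_mean) / prior_var
        w -= lr * g
    return w

# Task A: plenty of data, broad zero-mean prior.
d = 5
w_true = rng.normal(size=d)
X_a = rng.normal(size=(200, d))
y_a = X_a @ w_true + 0.1 * rng.normal(size=200)
w_a = train_map(X_a, y_a, prior_mean=np.zeros(d), prior_var=10.0)

# Approximate the task-A posterior by a Gaussian centred at its MAP solution.
# The variance is fixed by assumption here; a Laplace approximation would fit it.
posterior_var = 0.5

# Task B: only 10 examples, but the task-A posterior now acts as the prior,
# sharing what was learned on task A.
X_b = rng.normal(size=(10, d))
y_b = X_b @ w_true + 0.1 * rng.normal(size=10)
w_b = train_map(X_b, y_b, prior_mean=w_a, prior_var=posterior_var)
print("task-B weights pulled toward task-A solution:", w_b)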

Comments: 5 pages, 2 figures
Journal: 4th workshop on Bayesian Deep Learning (NeurIPS 2019)
Categories: stat.ML, cs.LG
Related articles:
arXiv:2305.00934 [stat.ML] (Published 2023-05-01)
Variational Inference for Bayesian Neural Networks under Model and Parameter Uncertainty
arXiv:2006.12024 [stat.ML] (Published 2020-06-22)
Bayesian Neural Networks: An Introduction and Survey
arXiv:2008.08044 [stat.ML] (Published 2020-08-18)
Bayesian neural networks and dimensionality reduction