arXiv Analytics

arXiv:2002.11531 [stat.ML]

A general framework for ensemble distribution distillation

Jakob Lindqvist, Amanda Olmin, Fredrik Lindsten, Lennart Svensson

Published 2020-02-26 (Version 1)

Ensembles of neural networks have been shown to give better performance than single networks, both in terms of predictions and uncertainty estimation. Additionally, ensembles allow the uncertainty to be decomposed into aleatoric (data) and epistemic (model) components, giving a more complete picture of the predictive uncertainty. Ensemble distillation is the process of compressing an ensemble into a single model, often resulting in a leaner model that still outperforms the individual ensemble members. Unfortunately, standard distillation erases the natural uncertainty decomposition of the ensemble. We present a general framework for distilling both regression and classification ensembles in a way that preserves the decomposition. We demonstrate the desired behaviour of our framework and show that its predictive performance is on par with standard distillation.
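As a rough illustration of the aleatoric/epistemic decomposition the abstract refers to, the Python sketch below splits the predictive uncertainty of a classification ensemble into an aleatoric term (the average entropy of the individual members) and an epistemic term (the mutual information between the prediction and the choice of member). This is a generic illustration under standard assumptions, not code from the paper; the function name and the example probabilities are made up for the demonstration.

import numpy as np

def decompose_uncertainty(member_probs, eps=1e-12):
    """Decompose the predictive uncertainty of a classification ensemble.

    member_probs: array of shape (M, K) holding the predicted class
    probabilities of M ensemble members over K classes for one input.
    Returns (total, aleatoric, epistemic), where
      total     = entropy of the ensemble-averaged prediction,
      aleatoric = mean entropy of the individual members,
      epistemic = total - aleatoric (mutual information).
    """
    member_probs = np.asarray(member_probs)
    mean_probs = member_probs.mean(axis=0)

    total = -np.sum(mean_probs * np.log(mean_probs + eps))
    aleatoric = -np.mean(np.sum(member_probs * np.log(member_probs + eps), axis=1))
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Members that agree: low epistemic uncertainty.
agreeing = [[0.90, 0.10], [0.85, 0.15], [0.88, 0.12]]
# Members that disagree: high epistemic uncertainty.
disagreeing = [[0.95, 0.05], [0.50, 0.50], [0.05, 0.95]]

print(decompose_uncertainty(agreeing))
print(decompose_uncertainty(disagreeing))

The point of the paper's framework is that a distilled model can be trained to reproduce this kind of decomposition, rather than only the averaged prediction that standard distillation retains.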

Related articles:
arXiv:1905.00076 [stat.ML] (Published 2019-04-30)
Ensemble Distribution Distillation
arXiv:2310.19384 [stat.ML] (Published 2023-10-30)
Deep anytime-valid hypothesis testing
arXiv:2306.00091 [stat.ML] (Published 2023-05-31)
A General Framework for Equivariant Neural Networks on Reductive Lie Groups