arXiv Analytics

arXiv:1905.00076 [stat.ML]

Ensemble Distribution Distillation

Andrey Malinin, Bruno Mlodozeniec, Mark Gales

Published 2019-04-30 (Version 1)

Ensembles of Neural Network (NN) models are known to yield improvements in accuracy. Furthermore, they have been empirically shown to yield robust measures of uncertainty, though without theoretical guarantees. However, ensembles come at a high computational and memory cost, which may be prohibitive for certain applications. There has been significant work on distilling an ensemble into a single model. Such approaches decrease computational cost and allow a single model to achieve accuracy comparable to that of an ensemble. However, information about the \emph{diversity} of the ensemble, which can yield estimates of \emph{knowledge uncertainty}, is lost. Recently, a new class of models, called Prior Networks, has been proposed, which allows a single neural network to explicitly model a distribution over output distributions, effectively emulating an ensemble. In this work, ensembles and Prior Networks are combined to yield a novel approach called \emph{Ensemble Distribution Distillation} (EnD$^2$), which distills an ensemble into a single Prior Network. This allows a single model to retain both the improved classification performance of the ensemble and its measures of diversity. In this initial investigation, the properties of EnD$^2$ are examined and confirmed on an artificial dataset.
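To make the idea concrete, below is a minimal sketch, in PyTorch, of the kind of objective described in the abstract: a Prior Network predicts Dirichlet concentration parameters for each input and is trained to maximise the Dirichlet likelihood of the softmax outputs produced by the ensemble members, so that the spread of the ensemble (and hence knowledge uncertainty) is preserved. The network architecture, function names, and toy data here are illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch of Ensemble Distribution Distillation (EnD^2).
# Assumption: a Prior Network outputs Dirichlet concentrations alpha > 0 and is
# fit by minimising the negative Dirichlet log-likelihood of the ensemble
# members' categorical (softmax) predictions. Names are hypothetical.
import torch
import torch.nn as nn


class PriorNetwork(nn.Module):
    """Maps an input to Dirichlet concentration parameters alpha > 0."""

    def __init__(self, in_dim: int, num_classes: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Exponentiating the logits keeps the concentrations strictly positive.
        return torch.exp(self.net(x))


def end2_loss(alpha: torch.Tensor, ensemble_probs: torch.Tensor,
              eps: float = 1e-6) -> torch.Tensor:
    """Negative Dirichlet log-likelihood of the ensemble members' predictions.

    alpha:          [batch, classes]           predicted concentrations
    ensemble_probs: [batch, members, classes]  softmax outputs of each member
    """
    probs = ensemble_probs.clamp_min(eps)
    # log of the Dirichlet normalising constant: lgamma(sum alpha) - sum lgamma(alpha)
    log_norm = torch.lgamma(alpha.sum(-1)) - torch.lgamma(alpha).sum(-1)      # [batch]
    # unnormalised log-density of each member's categorical under the Dirichlet
    log_lik = ((alpha.unsqueeze(1) - 1.0) * probs.log()).sum(-1)              # [batch, members]
    return -(log_norm.unsqueeze(1) + log_lik).mean()


if __name__ == "__main__":
    # Toy usage: distil a 5-member stand-in "ensemble" of random predictions.
    torch.manual_seed(0)
    x = torch.randn(128, 2)
    ensemble_probs = torch.softmax(torch.randn(128, 5, 3), dim=-1)
    model = PriorNetwork(in_dim=2, num_classes=3)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        loss = end2_loss(model(x), ensemble_probs)
        loss.backward()
        opt.step()
    print("final loss:", loss.item())
```

The key design point this sketch illustrates is that, unlike standard distillation onto the ensemble's mean prediction, the Dirichlet target retains how much the members disagree: sharp concentrations correspond to agreement, flat ones to diversity.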

Related articles:
arXiv:2002.11531 [stat.ML] (Published 2020-02-26)
A general framework for ensemble distribution distillation
arXiv:1503.02531 [stat.ML] (Published 2015-03-09)
Distilling the Knowledge in a Neural Network
arXiv:1808.04888 [stat.ML] (Published 2018-08-14)
Skill Rating for Generative Models