arXiv Analytics

arXiv:1903.09215 [stat.ML]

Empirical confidence estimates for classification by deep neural networks

Chris Finlay, Adam M. Oberman

Published 2019-03-21 (Version 1)

How well can we estimate the probability that the classification, $C(f(x))$, predicted by a deep neural network is correct (or in the Top 5)? We consider the case of a classification neural network trained with the KL divergence, which is assumed to generalize, as measured empirically by the test error and test loss. We present conditional probabilities for predictions based on the histogram of uncertainty metrics, which have a significant Bayes ratio. Previous work in this area includes Bayesian neural networks. Measured by the expected Bayes ratio on ImageNet, our metric is twice as predictive as our best tuned implementation of Bayesian dropout~\cite{gal2016dropout}. Our method uses just the softmax values and a stored histogram, so it is essentially free to compute, compared to many times the inference cost for Bayesian dropout.
