arXiv Analytics


arXiv:1802.04865 [stat.ML]

Learning Confidence for Out-of-Distribution Detection in Neural Networks

Terrance DeVries, Graham W. Taylor

Published 2018-02-13 (Version 1)

Modern neural networks are powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong. Closely related is the task of out-of-distribution detection, in which a network must determine whether an input lies outside the set on which it can be expected to perform safely. To jointly address these issues, we propose a method for learning confidence estimates in neural networks that is simple to implement and produces intuitively interpretable outputs. We demonstrate that, on the task of out-of-distribution detection, our technique surpasses recently proposed approaches that construct confidence from the network's output distribution, without requiring additional labels or access to out-of-distribution examples. Additionally, we address the problem of calibrating out-of-distribution detectors, demonstrating that misclassified in-distribution examples can serve as a proxy for out-of-distribution examples.
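The abstract does not spell out the training objective, so the following is only a minimal PyTorch-style sketch of one way a learned confidence branch could be wired up: the network emits class logits plus a scalar confidence, the predicted distribution is blended with the ground-truth label in proportion to (1 - confidence), and a penalty discourages the network from always requesting such "hints". The architecture, the blending scheme, and the hyperparameter lam are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only; assumes PyTorch and an interpolation-style confidence loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfidenceNet(nn.Module):
    """Classifier with an extra branch that outputs a confidence score in (0, 1)."""
    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, num_classes)  # class logits
        self.confidence = nn.Linear(hidden, 1)            # scalar confidence logit

    def forward(self, x):
        h = self.body(x)
        return self.classifier(h), torch.sigmoid(self.confidence(h))

def confidence_loss(logits, conf, target, lam=0.1):
    """Blend predictions with the one-hot target weighted by (1 - conf),
    and penalize low confidence so the model cannot rely on hints forever."""
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, probs.size(1)).float()
    conf = conf.clamp(1e-6, 1.0 - 1e-6)
    blended = conf * probs + (1.0 - conf) * onehot
    nll = F.nll_loss(torch.log(blended), target)
    penalty = -torch.log(conf).mean()
    return nll + lam * penalty

# Usage sketch: at test time a low confidence output flags a potentially
# out-of-distribution input (thresholded to produce a detector).
model = ConfidenceNet()
x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
logits, conf = model(x)
loss = confidence_loss(logits, conf, y)
loss.backward()
```

Under this formulation the confidence output itself serves as the out-of-distribution score at inference time, which matches the abstract's claim that no out-of-distribution examples or extra labels are needed during training.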

Related articles:
arXiv:2102.12959 [stat.ML] (Published 2021-02-24)
A statistical theory of out-of-distribution detection
arXiv:2406.16045 [stat.ML] (Published 2024-06-23)
Combine and Conquer: A Meta-Analysis on Data Shift and Out-of-Distribution Detection
arXiv:1808.06664 [stat.ML] (Published 2018-08-20)
Out-of-Distribution Detection using Multiple Semantic Label Representations