arXiv:1808.06664 [stat.ML]

Out-of-Distribution Detection using Multiple Semantic Label Representations

Gabi Shalev, Yossi Adi, Joseph Keshet

Published 2018-08-20, Version 1

Deep neural networks are powerful models that have attained remarkable results on a variety of tasks. These models perform well when the training and test data are drawn from the same distribution, but it is unclear how a network behaves when presented with an out-of-distribution example. In this work, we consider the problem of out-of-distribution detection in neural networks. Instead of a sparse one-hot representation, we propose to use multiple dense semantic representations as the target label; specifically, several word representations obtained from different corpora or architectures. We evaluated the proposed model on computer vision and speech command detection tasks and compared it to previous methods. Results suggest that our method compares favorably with previous work. In addition, we demonstrate the effectiveness of our approach for detecting wrongly classified and adversarial examples.
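The core idea of the abstract, regressing each network head onto a dense word-embedding target and scoring an input by its agreement with the known class embeddings, might be sketched as follows. This is a minimal NumPy illustration: the function names and the max-cosine scoring rule are assumptions made for exposition, not the paper's exact formulation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D vectors (small epsilon avoids 0/0)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def ood_score(head_outputs, label_tables):
    """Score an input by summing, over the embedding heads, the maximum
    cosine similarity between the head's predicted vector and any class
    embedding in that head's table. A low score suggests the input is
    out-of-distribution (assumed scoring rule, for illustration only)."""
    score = 0.0
    for pred, table in zip(head_outputs, label_tables):
        sims = [cosine(pred, emb) for emb in table]
        score += max(sims)
    return score

# Toy example with one head and three orthonormal class embeddings:
# a prediction near a class embedding scores higher than one that is
# equally far from all classes.
classes = [np.eye(3)]
in_dist = ood_score([np.array([0.9, 0.1, 0.0])], classes)
out_dist = ood_score([np.array([1.0, 1.0, 1.0])], classes)
```

With several heads trained against embeddings from different corpora or architectures, the sum aggregates their (partly independent) agreement signals, which is what motivates using multiple representations rather than one.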

Related articles: Most relevant | Search more
arXiv:1802.04865 [stat.ML] (Published 2018-02-13)
Learning Confidence for Out-of-Distribution Detection in Neural Networks
arXiv:2102.12959 [stat.ML] (Published 2021-02-24)
A statistical theory of out-of-distribution detection
arXiv:2406.16045 [stat.ML] (Published 2024-06-23)
Combine and Conquer: A Meta-Analysis on Data Shift and Out-of-Distribution Detection