arXiv:2402.15374 [cs.CV]

Outlier detection by ensembling uncertainty with negative objectness

Anja Delić, Matej Grcić, Siniša Šegvić

Published 2024-02-23, updated 2024-06-10Version 2

Outlier detection is an essential capability in safety-critical applications of supervised visual recognition. Most existing methods deliver their best results by encouraging standard closed-set models to produce low-confidence predictions on negative training data. However, that approach conflates prediction uncertainty with recognition of the negative class. We therefore reconsider direct prediction of K+1 logits that correspond to K ground-truth classes and one outlier class. This setup allows us to formulate a novel anomaly score as an ensemble of in-distribution uncertainty and the posterior of the outlier class, which we term negative objectness. Outliers can then be detected independently due to i) high prediction uncertainty or ii) similarity with negative data. We embed our method into a dense prediction architecture with mask-level recognition over K+2 classes. The training procedure encourages the novel (K+2)-th class to learn negative objectness at pasted negative instances. Our models outperform the current state of the art on standard benchmarks for image-wide and pixel-level outlier detection, both with and without training on real negative data.
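The anomaly score described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: it assumes the K+1 logits are softmax-normalized, takes one minus the maximum renormalized inlier posterior as the in-distribution uncertainty term, and ensembles the two signals by simple addition; the paper's actual uncertainty measure and combination rule may differ.

```python
import numpy as np

def anomaly_score(logits):
    """Toy anomaly score over K+1 logits (K inlier classes + 1 outlier class).

    Ensembles two signals, per the abstract's description:
      i)  in-distribution uncertainty among the K known classes, and
      ii) negative objectness, i.e. the posterior of the outlier class.
    The exact terms used in the paper may differ; this is a sketch.
    """
    # Numerically stable softmax over all K+1 logits.
    z = logits - logits.max()
    probs = np.exp(z)
    probs /= probs.sum()

    p_inlier = probs[:-1]       # posteriors of the K known classes
    neg_objectness = probs[-1]  # posterior of the outlier class

    # Uncertainty among inliers: 1 - max of the renormalized inlier posterior
    # (an assumed choice of uncertainty measure for illustration).
    p_cond = p_inlier / p_inlier.sum()
    uncertainty = 1.0 - p_cond.max()

    return uncertainty + neg_objectness  # high score => likely outlier

# Three illustrative cases with K=3 inlier classes:
confident_inlier = np.array([8.0, 0.5, 0.3, -2.0])  # clear inlier
ambiguous = np.array([2.0, 2.0, 2.0, -2.0])         # uncertain among inliers
negative_like = np.array([0.5, 0.3, 0.2, 6.0])      # resembles negative data
```

Note how the two detection routes from the abstract are decoupled: the ambiguous input scores high through the uncertainty term even though its negative objectness is tiny, while the negative-like input scores high mainly through the outlier-class posterior.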

Related articles: Most relevant | Search more
arXiv:2105.09270 [cs.CV] (Published 2021-05-19)
Do We Really Need to Learn Representations from In-domain Data for Outlier Detection?
arXiv:2310.06085 [cs.CV] (Published 2023-08-20)
Quantile-based Maximum Likelihood Training for Outlier Detection
arXiv:1411.6850 [cs.CV] (Published 2014-11-25)
Similarity-based approach for outlier detection