{ "id": "2106.03004", "version": "v1", "published": "2021-06-06T01:45:11.000Z", "updated": "2021-06-06T01:45:11.000Z", "title": "Exploring the Limits of Out-of-Distribution Detection", "authors": [ "Stanislav Fort", "Jie Ren", "Balaji Lakshminarayanan" ], "comment": "S.F. and J.R. contributed equally", "categories": [ "cs.LG" ], "abstract": "Near out-of-distribution detection (OOD) is a major challenge for deep neural networks. We demonstrate that large-scale pre-trained transformers can significantly improve the state-of-the-art (SOTA) on a range of near OOD tasks across different data modalities. For instance, on CIFAR-100 vs CIFAR-10 OOD detection, we improve the AUROC from 85% (current SOTA) to more than 96% using Vision Transformers pre-trained on ImageNet-21k. On a challenging genomics OOD detection benchmark, we improve the AUROC from 66% to 77% using transformers and unsupervised pre-training. To further improve performance, we explore the few-shot outlier exposure setting where a few examples from outlier classes may be available; we show that pre-trained transformers are particularly well-suited for outlier exposure, and that the AUROC of OOD detection on CIFAR-100 vs CIFAR-10 can be improved to 98.7% with just 1 image per OOD class, and 99.46% with 10 images per OOD class. For multi-modal image-text pre-trained transformers such as CLIP, we explore a new way of using just the names of outlier classes as a sole source of information without any accompanying images, and show that this outperforms previous SOTA on standard vision OOD benchmark tasks.", "revisions": [ { "version": "v1", "updated": "2021-06-06T01:45:11.000Z" } ], "analyses": { "keywords": [ "out-of-distribution detection", "genomics ood detection benchmark", "standard vision ood benchmark tasks", "pre-trained transformers", "ood class" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }