arXiv Analytics

arXiv:1906.12340 [cs.LG]

Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty

Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, Dawn Song

Published 2019-06-28 (Version 1)

Self-supervision provides effective representations for downstream tasks without requiring labels. However, existing approaches lag behind fully supervised training and are often not thought beneficial beyond obviating the need for annotations. We find that self-supervision can benefit robustness in a variety of ways, including robustness to adversarial examples, label corruption, and common input corruptions. Additionally, self-supervision greatly benefits out-of-distribution detection on difficult, near-distribution outliers, so much so that it exceeds the performance of fully supervised methods. These results demonstrate the promise of self-supervision for improving robustness and uncertainty estimation and establish these tasks as new axes of evaluation for future self-supervised learning research.
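The abstract does not spell out the training recipe; the general pattern it describes is training a classifier jointly with an auxiliary self-supervised objective (rotation prediction is one common choice) so that the shared representation gains robustness and better out-of-distribution scores. The following is a minimal, hypothetical PyTorch sketch of that pattern, not the authors' code: names such as SelfSupervisedClassifier, rot_head, and lambda_ss are illustrative assumptions rather than identifiers from the linked repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(x: torch.Tensor):
    """Return 4 rotated copies (0/90/180/270 degrees) of each image plus rotation labels."""
    rotated = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)], dim=0)
    labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    return rotated, labels

class SelfSupervisedClassifier(nn.Module):
    """A feature extractor with a supervised head and an auxiliary rotation-prediction head."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                          # e.g. a ResNet trunk producing feat_dim features
        self.cls_head = nn.Linear(feat_dim, num_classes)  # supervised classification head
        self.rot_head = nn.Linear(feat_dim, 4)            # auxiliary head: which of 4 rotations was applied

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.rot_head(feats)

def training_loss(model, x, y, lambda_ss: float = 0.5):
    """Supervised cross-entropy plus a weighted auxiliary rotation-prediction loss (lambda_ss is an assumed weight)."""
    logits, _ = model(x)
    sup_loss = F.cross_entropy(logits, y)

    x_rot, rot_labels = rotate_batch(x)
    _, rot_logits = model(x_rot)
    ss_loss = F.cross_entropy(rot_logits, rot_labels)

    return sup_loss + lambda_ss * ss_loss

At test time, the auxiliary head's loss on an input can also serve as an out-of-distribution score: inputs whose rotations are predicted poorly are flagged as anomalous. This is a sketch of the general idea only; consult the repository linked in the Comments for the authors' implementation.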

Comments: Code and dataset available at https://github.com/hendrycks/ss-ood
Categories: cs.LG, cs.CV, stat.ML
Related articles:
arXiv:1901.09960 [cs.LG] (Published 2019-01-28)
Using Pre-Training Can Improve Model Robustness and Uncertainty
arXiv:2006.07733 [cs.LG] (Published 2020-06-13)
Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
arXiv:2307.08913 [cs.LG] (Published 2023-07-18)
Towards the Sparseness of Projection Head in Self-Supervised Learning