arXiv Analytics

arXiv:2103.01988 [cs.CV]

Self-supervised Pretraining of Visual Features in the Wild

Priya Goyal, Mathilde Caron, Benjamin Lefaudeux, Min Xu, Pengchao Wang, Vivek Pai, Mannat Singh, Vitaliy Liptchinsky, Ishan Misra, Armand Joulin, Piotr Bojanowski

Published 2021-03-02 (Version 1)

Recently, self-supervised learning methods like MoCo, SimCLR, BYOL and SwAV have reduced the gap with supervised methods. These results have been achieved in a controlled environment, namely the highly curated ImageNet dataset. However, the premise of self-supervised learning is that it can learn from any random image and from any unbounded dataset. In this work, we explore whether self-supervision lives up to this expectation by training large models on random, uncurated images with no supervision. Our final SElf-supERvised (SEER) model, a RegNetY with 1.3B parameters trained on 1B random images with 512 GPUs, achieves 84.2% top-1 accuracy, surpassing the best self-supervised pretrained model by 1% and confirming that self-supervised learning works in a real-world setting. Interestingly, we also observe that self-supervised models are good few-shot learners, achieving 77.9% top-1 with access to only 10% of ImageNet. Code: https://github.com/facebookresearch/vissl
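
SEER's pretraining builds on SwAV's online clustering objective, in which cluster assignments ("codes") computed from one augmented view of an image must be predicted from another view. The sketch below is a minimal, self-contained illustration of that swapped-prediction loss in plain PyTorch; it is not the authors' VISSL implementation, and the function names (swav_loss, sinkhorn) and hyperparameters (eps=0.05, temp=0.1, 3 Sinkhorn iterations, toy tensor sizes) are illustrative assumptions.

import torch
import torch.nn.functional as F

@torch.no_grad()
def sinkhorn(scores, eps=0.05, n_iters=3):
    # Sinkhorn-Knopp: turn prototype scores into soft assignments whose
    # prototypes are used roughly equally across the batch (no gradients).
    Q = torch.exp(scores / eps).t()              # (num_prototypes, batch)
    Q /= Q.sum()
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True); Q /= K  # normalize over prototypes
        Q /= Q.sum(dim=0, keepdim=True); Q /= B  # normalize over samples
    return (Q * B).t()                           # per-sample codes, rows sum to 1

def swav_loss(z1, z2, prototypes, temp=0.1):
    # Swapped prediction: predict the codes of one view from the other view.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    c = F.normalize(prototypes, dim=1)
    s1, s2 = z1 @ c.t(), z2 @ c.t()              # prototype scores per view
    q1, q2 = sinkhorn(s1), sinkhorn(s2)          # online codes (targets)
    p1 = F.log_softmax(s1 / temp, dim=1)
    p2 = F.log_softmax(s2 / temp, dim=1)
    return -0.5 * ((q2 * p1).sum(dim=1).mean() + (q1 * p2).sum(dim=1).mean())

# Toy usage: two views of 8 images, 128-d embeddings, 32 prototypes.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
prototypes = torch.randn(32, 128)
print(swav_loss(z1, z2, prototypes))

In the paper's setting the embeddings come from a RegNetY backbone plus a projection head, and the prototype matrix is a learned parameter; the toy random tensors above only stand in for those components.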

Related articles:
arXiv:1705.08631 [cs.CV] (Published 2017-05-24)
Self-supervised learning of visual features through embedding images into text topic spaces
arXiv:2104.09866 [cs.CV] (Published 2021-04-20)
Distill on the Go: Online knowledge distillation in self-supervised learning
arXiv:1911.08850 [cs.CV] (Published 2019-11-20)
Self-supervised Learning of 3D Objects from Natural Images