arXiv:2004.06638 [cs.CV]

Distilling Localization for Self-Supervised Representation Learning

Nanxuan Zhao, Zhirong Wu, Rynson W. H. Lau, Stephen Lin

Published 2020-04-14, Version 1

For high-level visual recognition, self-supervised learning defines and makes use of proxy tasks such as colorization and visual tracking to learn a semantic representation useful for distinguishing objects. In this paper, through visualizing and diagnosing classification errors, we observe that current self-supervised models are ineffective at localizing the foreground object, limiting their ability to extract discriminative high-level features. To address this problem, we propose a data-driven approach for learning invariance to backgrounds. It first estimates foreground saliency in images and then creates augmentations by copy-and-pasting the foreground onto a variety of backgrounds. The learning follows an instance discrimination approach which encourages the features of augmentations from the same image to be similar. In this way, the representation is trained to disregard background content and focus on the foreground. We study a variety of saliency estimation methods, and find that most methods lead to improvements for self-supervised learning. With this approach, strong performance is achieved for self-supervised learning on ImageNet classification, and also for transfer learning to object detection on PASCAL VOC 2007.
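The core augmentation the abstract describes — compositing an estimated foreground onto a different background so the learned representation becomes background-invariant — can be sketched as below. This is a minimal illustration, not the authors' implementation; the function name, the hard saliency threshold, and the use of NumPy arrays are all assumptions for exposition.

```python
import numpy as np

def copy_paste_augment(image, saliency, background, threshold=0.5):
    """Paste the salient foreground of `image` onto `background`.

    image, background: float arrays of shape (H, W, 3) with values in [0, 1].
    saliency: float array of shape (H, W) in [0, 1], from any saliency
    estimator; pixels above `threshold` are treated as foreground.
    (Hypothetical helper for illustration; the paper studies several
    saliency estimation methods rather than a fixed threshold rule.)
    """
    # Binarize the saliency map into a foreground mask and broadcast
    # it over the color channels.
    mask = (saliency > threshold).astype(image.dtype)[..., None]
    # Keep foreground pixels from the original image, fill the rest
    # from the new background.
    return mask * image + (1.0 - mask) * background
```

Augmentations produced this way from the same source image share a foreground but differ in background; an instance discrimination objective that pulls their features together therefore pushes the representation to ignore background content.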

Related articles: Most relevant | Search more
arXiv:2205.14418 [cs.CV] (Published 2022-05-28)
Data Generation for Satellite Image Classification Using Self-Supervised Representation Learning
arXiv:2206.06461 [cs.CV] (Published 2022-06-13)
Self-Supervised Representation Learning With MUlti-Segmental Informational Coding (MUSIC)
arXiv:2311.03629 [cs.CV] (Published 2023-11-07)
Random Field Augmentations for Self-Supervised Representation Learning