arXiv Analytics

arXiv:2006.06606 [cs.CV]

What makes instance discrimination good for transfer learning?

Nanxuan Zhao, Zhirong Wu, Rynson W. H. Lau, Stephen Lin

Published 2020-06-11, Version 1

Unsupervised visual pretraining based on the instance discrimination pretext task has shown significant progress. Notably, in the recent work of MoCo, unsupervised pretraining has been shown to surpass its supervised counterpart when finetuning on downstream applications such as object detection on PASCAL VOC. It comes as a surprise that image annotations would be better left unused for transfer learning. In this work, we investigate the following problems: What makes instance discrimination pretraining good for transfer learning? What knowledge is actually learned and transferred from unsupervised pretraining? From this understanding of unsupervised pretraining, can we make supervised pretraining great again? Our findings are threefold. First, what truly matters for this detection transfer is low-level and mid-level representations, not high-level representations. Second, the intra-category invariance enforced by the traditional supervised model weakens transferability by increasing task misalignment. Finally, supervised pretraining can be strengthened by following an exemplar-based approach without explicit constraints among the instances within the same category.
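
To make the instance discrimination objective concrete, below is a minimal sketch of a MoCo-style InfoNCE loss in PyTorch, where each image instance acts as its own class. The function name, tensor shapes, and the temperature value are illustrative assumptions, not details taken from the paper.

    # Sketch of an instance discrimination (InfoNCE) loss, assuming a
    # MoCo-style setup: q and k are embeddings of two augmented views of the
    # same images, and queue holds embeddings of other instances as negatives.
    import torch
    import torch.nn.functional as F

    def instance_discrimination_loss(q, k, queue, temperature=0.07):
        # q:     (N, D) query embeddings, L2-normalized
        # k:     (N, D) key embeddings for the same instances, L2-normalized
        # queue: (K, D) negative embeddings from other instances
        l_pos = torch.einsum("nd,nd->n", q, k).unsqueeze(-1)  # (N, 1) positives
        l_neg = torch.einsum("nd,kd->nk", q, queue)           # (N, K) negatives
        logits = torch.cat([l_pos, l_neg], dim=1) / temperature
        # The positive key sits at index 0; no class labels appear anywhere,
        # so no intra-category invariance is enforced.
        labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
        return F.cross_entropy(logits, labels)

    # Example usage with random, normalized embeddings.
    q = F.normalize(torch.randn(8, 128), dim=1)
    k = F.normalize(torch.randn(8, 128), dim=1)
    queue = F.normalize(torch.randn(4096, 128), dim=1)
    print(instance_discrimination_loss(q, k, queue))

Note that class labels never enter this loss, which is the source of the contrast with standard supervised pretraining: the paper's third finding suggests an exemplar-style supervised variant that likewise avoids imposing explicit constraints among instances of the same category.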

Related articles:
arXiv:1610.05861 [cs.CV] (Published 2016-10-19)
StuffNet: Using 'Stuff' to Improve Object Detection
arXiv:1811.08737 [cs.CV] (Published 2018-11-21)
SpotTune: Transfer Learning through Adaptive Fine-tuning
arXiv:1811.04863 [cs.CV] (Published 2018-11-12)
A Framework of Transfer Learning in Object Detection for Embedded Systems