arXiv Analytics


arXiv:2206.03012 [cs.CV]

TriBYOL: Triplet BYOL for Self-Supervised Representation Learning

Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama

Published 2022-06-07 (Version 1)

This paper proposes a novel self-supervised learning method that learns better representations with small batch sizes. Many self-supervised learning methods based on Siamese networks have emerged and received significant attention, but they need large batch sizes to learn good representations and therefore demand heavy computational resources. We present a new triplet network combined with a triple-view loss that improves self-supervised representation learning with small batch sizes. Experimental results show that our method substantially outperforms state-of-the-art self-supervised learning methods on several datasets in small-batch settings. Our method thus offers a feasible route to self-supervised learning on real-world high-resolution images with small batch sizes.
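The abstract does not spell out the triple-view loss, but the general idea of enforcing agreement across three augmented views of the same image can be sketched as follows. This is an illustrative, hypothetical sketch of a symmetric three-view agreement loss built from negative cosine similarity (the similarity measure used in BYOL-style methods), not the paper's exact formulation; the function names and the pairwise-averaging scheme are assumptions.

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two vectors given as flat lists of floats.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def triple_view_loss(z1, z2, z3):
    """Hypothetical three-view consistency loss (a sketch, not TriBYOL's
    exact loss): average the negative cosine similarity over all three
    pairs of view embeddings, so perfect agreement gives -1.0."""
    pairs = [(z1, z2), (z1, z3), (z2, z3)]
    return -sum(cosine_sim(a, b) for a, b in pairs) / len(pairs)

# Usage: three identical embeddings agree perfectly, so the loss is -1.0.
v = [1.0, 0.0, 0.0]
print(triple_view_loss(v, v, v))  # -1.0
```

Minimizing such a loss pushes the three embeddings of the same image toward one another; in BYOL-style training the targets are typically produced by a momentum-updated network with a stop-gradient, a detail omitted from this sketch.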

Related articles:
arXiv:2209.06067 [cs.CV] (Published 2022-09-13)
SeRP: Self-Supervised Representation Learning Using Perturbed Point Clouds
arXiv:2012.00868 [cs.CV] (Published 2020-12-01)
Towards Good Practices in Self-supervised Representation Learning
arXiv:2206.06461 [cs.CV] (Published 2022-06-13)
Self-Supervised Representation Learning With MUlti-Segmental Informational Coding (MUSIC)