arXiv:2303.12130 [cs.CV]

MV-MR: multi-views and multi-representations for self-supervised learning and knowledge distillation

Vitaliy Kinakh, Mariia Drozdova, Slava Voloshynovskiy

Published 2023-03-21 | Version 1

We present a new method for self-supervised learning and knowledge distillation based on multi-views and multi-representations (MV-MR). MV-MR maximizes the dependence between learnable embeddings from augmented and non-augmented views, jointly with the dependence between learnable embeddings from the augmented view and multiple non-learnable representations from the non-augmented view. We show that the proposed method can be used for efficient self-supervised classification and model-agnostic knowledge distillation. Unlike other self-supervised techniques, our approach does not use contrastive learning, clustering, or stop gradients. MV-MR is a generic framework that allows constraints to be imposed on the learnable embeddings by using image multi-representations as regularizers; knowledge distillation is treated as a particular case of such regularization. MV-MR achieves state-of-the-art performance on the STL10 and ImageNet-1K datasets among non-contrastive and clustering-free methods. We show that a lower-complexity ResNet50 model, pretrained with the proposed knowledge distillation from a CLIP ViT model, achieves state-of-the-art performance on STL10 linear evaluation. The code is available at: https://github.com/vkinakh/mv-mr
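The abstract only names "maximization of dependence" between embeddings without giving the estimator. Below is a minimal sketch of such an objective, assuming an empirical distance-correlation measure of dependence and hypothetical names (`z_aug`, `z_clean`, `nonlearnable_feats`, `mvmr_style_loss`) that are illustrative rather than taken from the paper or repository.

```python
import torch


def distance_correlation(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
    """Empirical (squared) distance correlation between two batches of features.

    x: (N, Dx) and y: (N, Dy) may have different dimensionalities, which is what
    allows comparing a learnable embedding with an arbitrary fixed representation.
    The result lies in [0, 1]; larger values mean stronger statistical dependence.
    """
    # Pairwise Euclidean distance matrices within each batch.
    a = torch.cdist(x, x)
    b = torch.cdist(y, y)
    # Double-center the distance matrices.
    A = a - a.mean(dim=0, keepdim=True) - a.mean(dim=1, keepdim=True) + a.mean()
    B = b - b.mean(dim=0, keepdim=True) - b.mean(dim=1, keepdim=True) + b.mean()
    # Sample distance covariance / variances.
    dcov2_xy = (A * B).mean()
    dcov2_xx = (A * A).mean()
    dcov2_yy = (B * B).mean()
    return dcov2_xy / (torch.sqrt(dcov2_xx * dcov2_yy) + eps)


def mvmr_style_loss(z_aug: torch.Tensor,
                    z_clean: torch.Tensor,
                    nonlearnable_feats: list[torch.Tensor],
                    weight: float = 1.0) -> torch.Tensor:
    """Negative sum of dependencies, so minimizing it maximizes dependence.

    z_aug / z_clean: learnable embeddings of the augmented / non-augmented view.
    nonlearnable_feats: fixed representations of the non-augmented view
    (e.g. hand-crafted descriptors or a frozen teacher's features) used as
    regularizers; a single frozen teacher recovers the knowledge-distillation case.
    """
    loss = -distance_correlation(z_aug, z_clean)
    for h in nonlearnable_feats:
        loss = loss - weight * distance_correlation(z_aug, h.flatten(1))
    return loss
```

In a training loop, `z_aug` and `z_clean` would come from the same encoder applied to augmented and original images, while the entries of `nonlearnable_feats` are detached or produced by frozen extractors, which is consistent with the abstract's claim that no contrastive pairs, clustering, or stop-gradient tricks are required.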

Related articles:
arXiv:2207.10425 [cs.CV] (Published 2022-07-21)
KD-MVS: Knowledge Distillation Based Self-supervised Learning for MVS
arXiv:2006.03810 [cs.CV] (Published 2020-06-06)
An Empirical Analysis of the Impact of Data Augmentation on Knowledge Distillation
arXiv:1904.01802 [cs.CV] (Published 2019-04-03)
Correlation Congruence for Knowledge Distillation
Baoyun Peng et al.