arXiv:2303.12130 [cs.CV]

Keywords: knowledge distillation, self-supervised learning, multi-representations, CLIP, ViT, state-of-the-art performance