arXiv:2403.05966 [cs.CV]

Can Generative Models Improve Self-Supervised Representation Learning?

Arash Afkanpour, Vahid Reza Khazaie, Sana Ayromlou, Fereshteh Forghani

Published 2024-03-09 (Version 1)

The rapid advancement of self-supervised learning (SSL) has highlighted its potential to leverage unlabeled data for learning powerful visual representations. However, existing SSL approaches, particularly those that contrast different views of the same image, often rely on a limited set of predefined data augmentations. This constrains the diversity and quality of transformations and leads to suboptimal representations. In this paper, we introduce a novel framework that enriches the SSL paradigm by using generative models to produce semantically consistent image augmentations. By conditioning a generative model directly on a source image's representation, our method generates diverse augmentations while preserving the semantics of the source image, offering a richer set of data for self-supervised learning. Our experiments show that this framework significantly improves the quality of the learned visual representations. Incorporating generative models into the SSL workflow thus opens new avenues for exploiting unlabeled visual data and paves the way for more robust and versatile representation learning techniques.
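The core idea described above can be illustrated with a minimal, self-contained sketch. Everything here is hypothetical scaffolding, not the paper's implementation: `encode` stands in for a vision backbone, `generate_view` stands in for a generative model conditioned on the source representation, and the training signal is a standard InfoNCE-style contrastive loss that pulls a generated view toward its source image and pushes it away from other images.

```python
import math
import random

# Hedged sketch of SSL with generator-produced views.
# All function names and shapes are illustrative assumptions.

def encode(image, dim=4):
    # Stand-in encoder: maps an "image" (here just an int seed) to a
    # unit-norm embedding. A real system would use a vision backbone.
    rng = random.Random(image)
    v = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def generate_view(representation, noise_scale=0.1, seed=0):
    # Stand-in for a generative model conditioned on the source
    # representation: a small perturbation mimics a diverse but
    # semantically consistent augmentation of the source image.
    rng = random.Random(seed)
    v = [x + rng.gauss(0.0, noise_scale) for x in representation]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def info_nce(anchor, positive, negatives, temperature=0.1):
    # InfoNCE: the generated view of an image is the positive;
    # views generated from other images serve as negatives.
    def sim(a, b):
        return sum(x * y for x, y in zip(a, b))
    logits = [sim(anchor, positive)] + [sim(anchor, n) for n in negatives]
    logits = [l / temperature for l in logits]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    return -math.log(exps[0] / sum(exps))

images = [1, 2, 3]
reps = [encode(img) for img in images]
views = [generate_view(r, seed=i) for i, r in enumerate(reps)]
loss = info_nce(reps[0], views[0], views[1:])
print(f"contrastive loss: {loss:.4f}")
```

Because the generated positive stays close to its source representation while the negatives come from unrelated images, the loss is small when the conditioning preserves semantics, which is exactly the property the framework relies on.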

Related articles: Most relevant | Search more
arXiv:2310.16695 [cs.CV] (Published 2023-10-25)
From Pointwise to Powerhouse: Initialising Neural Networks with Generative Models
arXiv:2210.06188 [cs.CV] (Published 2022-10-12)
Anomaly Detection using Generative Models and Sum-Product Networks in Mammography Scans
arXiv:1805.06605 [cs.CV] (Published 2018-05-17)
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models