arXiv Analytics


arXiv:2106.09065 [cs.CV]

SPeCiaL: Self-Supervised Pretraining for Continual Learning

Lucas Caccia, Joelle Pineau

Published 2021-06-16Version 1

This paper presents SPeCiaL: a method for unsupervised pretraining of representations tailored for continual learning. Our approach devises a meta-learning objective that differentiates through a sequential learning process. Specifically, we train a linear model over the representations to match different augmented views of the same image, with each view presented sequentially. The linear model is then evaluated both on its ability to classify images it has just seen and on images from previous iterations. This gives rise to representations that favor quick knowledge retention with minimal forgetting. We evaluate SPeCiaL in the Continual Few-Shot Learning setting, and show that it can match or outperform other supervised pretraining approaches.
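
The abstract describes a meta-objective that differentiates through a short sequential learning episode: a linear head is adapted on augmented views presented one batch at a time, and the encoder is trained so that the adapted head classifies both the current and earlier views well. The sketch below illustrates that kind of objective in PyTorch under stated assumptions; the names (encoder, episode_loss, inner_step), the zero-initialized head, and the plain SGD inner update are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def inner_step(W, feats, labels, lr=0.1):
    # One differentiable SGD step on the linear head W (feat_dim x num_classes).
    loss = F.cross_entropy(feats @ W, labels)
    (grad,) = torch.autograd.grad(loss, W, create_graph=True)
    return W - lr * grad

def episode_loss(encoder, batches, feat_dim, num_classes):
    # `batches` is a list of (images, pseudo_labels) pairs; pseudo-labels tie
    # augmented views of the same image together, and views arrive sequentially.
    W = torch.zeros(feat_dim, num_classes, requires_grad=True)
    seen, total = [], 0.0
    for images, labels in batches:
        feats = encoder(images)            # representations to be meta-learned
        W = inner_step(W, feats, labels)   # fast adaptation on the current views
        seen.append((feats, labels))
        # Evaluate the adapted head on the current batch and on all previous
        # ones; the latter term penalizes forgetting of earlier views.
        for f, y in seen:
            total = total + F.cross_entropy(f @ W, y)
    return total  # backpropagating through this loss updates the encoder
```

Because the inner SGD step is built with create_graph=True, gradients of the episode loss flow back through the head's adaptation into the encoder, which is what "differentiating through a sequential learning process" amounts to in this sketch.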

Related articles:
arXiv:2409.13550 [cs.CV] (Published 2024-09-20)
A preliminary study on continual learning in computer vision using Kolmogorov-Arnold Networks
arXiv:2411.06764 [cs.CV] (Published 2024-11-11)
Multi-Stage Knowledge Integration of Vision-Language Models for Continual Learning
arXiv:2405.14318 [cs.CV] (Published 2024-05-23)
Adaptive Retention & Correction for Continual Learning