arXiv:2109.01844 [cs.CV]

On robustness of generative representations against catastrophic forgetting

Wojciech Masarczyk, Kamil Deja, Tomasz Trzciński

Published 2021-09-04 (Version 1)

Catastrophic forgetting of previously learned knowledge while learning new tasks is a widely observed limitation of contemporary neural networks. Although many continual learning methods have been proposed to mitigate this drawback, the main question remains unanswered: what is the root cause of catastrophic forgetting? In this work, we aim to answer this question by posing and validating a set of research hypotheses related to the specificity of representations built internally by neural models. More specifically, we design a set of empirical evaluations that compare the robustness of representations in discriminative and generative models against catastrophic forgetting. We observe that representations learned by discriminative models are more prone to catastrophic forgetting than their generative counterparts, which sheds new light on the advantages of developing generative models for continual learning. Finally, our work opens new research pathways and possibilities for adopting generative models in continual learning beyond mere replay mechanisms.
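The kind of evaluation the abstract describes — checking how much a model's internal representations drift after training on a new task — can be sketched with a representation-similarity measure. The snippet below uses linear Centered Kernel Alignment (CKA) on simulated feature matrices; this is a hypothetical illustration, not the paper's actual protocol or metric, and the drift magnitudes are made up for the example.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation matrices
    of shape (n_samples, n_features). Returns a value in [0, 1];
    1.0 means the representations are identical up to rotation and scale."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(0)
# Hypothetical features extracted for a fixed probe set after task 1.
feats_before = rng.normal(size=(500, 64))
# Simulated features for the same inputs after training on task 2:
# a mildly drifted model vs. a heavily drifted one.
feats_small_drift = feats_before + 0.1 * rng.normal(size=feats_before.shape)
feats_large_drift = feats_before + 2.0 * rng.normal(size=feats_before.shape)

# Higher CKA against the pre-task-2 features indicates representations
# that were more robust to learning the new task.
cka_small = linear_cka(feats_before, feats_small_drift)
cka_large = linear_cka(feats_before, feats_large_drift)
```

In a real comparison one would extract `feats_before`/`feats_*_drift` from the discriminative and generative models on the same held-out inputs before and after each new task, then compare the similarity curves across task sequences.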

Related articles: Most relevant | Search more
arXiv:2307.11386 [cs.CV] (Published 2023-07-21)
CLR: Channel-wise Lightweight Reprogramming for Continual Learning
arXiv:2106.09065 [cs.CV] (Published 2021-06-16)
SPeCiaL: Self-Supervised Pretraining for Continual Learning
arXiv:2411.06764 [cs.CV] (Published 2024-11-11)
Multi-Stage Knowledge Integration of Vision-Language Models for Continual Learning