arXiv Analytics

arXiv:2004.14046 [cs.LG]

Reducing catastrophic forgetting with learning on synthetic data

Wojciech Masarczyk, Ivona Tautkute

Published 2020-04-29, Version 1

Catastrophic forgetting is the tendency of neural networks to lose previously acquired knowledge when trained on data in sequence: after learning two tasks one after another, performance on the first drops significantly. This is a serious obstacle to applying deep learning to real-life problems where not all object classes are known beforehand, or where changes in the data require the model to be adjusted. To reduce this problem, we investigate the use of synthetic data; specifically, we ask: is it possible to generate data synthetically such that learning on it in sequence does not result in catastrophic forgetting? We propose a method that generates such data through a two-step optimisation process via meta-gradients. Our experimental results on the Split-MNIST dataset show that training a model sequentially on such synthetic data does not result in catastrophic forgetting. We also show that our method of generating data is robust to different learning scenarios.
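The two-step optimisation via meta-gradients can be illustrated with a minimal toy sketch (this is an assumption-laden illustration of the general idea, not the authors' actual algorithm or hyperparameters): an inner step trains a fresh model on a learnable synthetic point, and an outer step differentiates the resulting real-data loss through that inner update to improve the synthetic point itself.

```python
import numpy as np

# Toy meta-gradient sketch (hypothetical example, not the paper's method):
# learn one synthetic point (xs, ys) for a scalar linear model y = w * x,
# so that a single inner gradient step on the synthetic point alone
# brings a freshly initialised model close to the real data.
x_real = np.linspace(-1.0, 1.0, 20)
y_real = 2.0 * x_real          # real task: ground-truth relation y = 2x

alpha = 0.5                    # inner-loop learning rate (model update)
beta = 0.1                     # outer-loop learning rate (data update)
xs, ys = 1.0, 0.5              # learnable synthetic data point

for _ in range(500):
    w0 = 0.0                   # fresh model at every meta-iteration
    # Inner step: one gradient step on the synthetic point only,
    # minimising (w * xs - ys)^2.
    w1 = w0 - alpha * 2.0 * xs * (w0 * xs - ys)
    # Outer loss: evaluate the adapted model on the real data.
    dL_dw1 = np.mean(2.0 * x_real * (w1 * x_real - y_real))
    # Meta-gradients: chain rule through the inner update w1(xs, ys).
    dw1_dxs = -alpha * 2.0 * (2.0 * w0 * xs - ys)
    dw1_dys = alpha * 2.0 * xs
    xs -= beta * dL_dw1 * dw1_dxs
    ys -= beta * dL_dw1 * dw1_dys

# A fresh model trained only on the learned synthetic point now
# fits the real task (one inner step from w0 = 0).
w_final = alpha * 2.0 * xs * ys
real_loss = np.mean((w_final * x_real - y_real) ** 2)
print(f"synthetic point: ({xs:.3f}, {ys:.3f}), real loss: {real_loss:.5f}")
```

In the paper's setting the same pattern would apply per task with image-shaped synthetic batches and a neural network in the inner loop; this scalar version only shows why the meta-gradient pushes the synthetic data toward points that teach the real task.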

Related articles:
arXiv:2505.24190 [cs.LG] (Published 2025-05-30, updated 2025-06-25)
Provably Improving Generalization of Few-Shot Models with Synthetic Data
arXiv:2212.06896 [cs.LG] (Published 2022-12-13)
In-Season Crop Progress in Unsurveyed Regions using Networks Trained on Synthetic Data
arXiv:2210.16405 [cs.LG] (Published 2022-10-28)
Evaluation of Categorical Generative Models -- Bridging the Gap Between Real and Synthetic Data