{ "id": "2004.14046", "version": "v1", "published": "2020-04-29T09:45:06.000Z", "updated": "2020-04-29T09:45:06.000Z", "title": "Reducing catastrophic forgetting with learning on synthetic data", "authors": [ "Wojciech Masarczyk", "Ivona Tautkute" ], "categories": [ "cs.LG", "stat.ML" ], "abstract": "Catastrophic forgetting is a problem caused by neural networks' inability to learn data in sequence: after learning two tasks in sequence, performance on the first one drops significantly. This is a serious disadvantage that prevents the application of many deep learning methods to real-life problems where not all object classes are known beforehand, or where changes in the data require adjustments to the model. To mitigate this problem, we investigate the use of synthetic data; namely, we answer the question: is it possible to generate synthetic data such that learning it in sequence does not result in catastrophic forgetting? We propose a method to generate such data in a two-step optimisation process via meta-gradients. Our experimental results on the Split-MNIST dataset show that training a model on such synthetic data in sequence does not result in catastrophic forgetting. We also show that our method of generating data is robust to different learning scenarios.", "revisions": [ { "version": "v1", "updated": "2020-04-29T09:45:06.000Z" } ], "analyses": { "keywords": [ "synthetic data", "reducing catastrophic forgetting", "two-step optimisation process", "real-life problems", "learn data" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }