arXiv Analytics

arXiv:2006.11942 [stat.ML]

Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent

Mehdi Abbana Bennani, Masashi Sugiyama

Published 2020-06-21 (Version 1)

In continual learning settings, deep neural networks are prone to catastrophic forgetting. Orthogonal Gradient Descent (OGD; Farajtabar et al., 2019) achieves state-of-the-art results in practice for continual learning, although no theoretical guarantees had been proven for it. We derive the first generalisation guarantees for OGD in continual learning with overparameterised neural networks. We find that OGD is provably robust to catastrophic forgetting only across a single task. We propose OGD+, prove that it is robust to catastrophic forgetting across an arbitrary number of tasks, and show that it satisfies tighter generalisation bounds. Our experiments show that OGD+ achieves state-of-the-art results in settings with a large number of tasks, even when the models are not overparameterised. We also derive a closed-form expression for the models learned across tasks, as a recursive kernel regression relation, which captures the transfer of knowledge between tasks. Finally, we quantify theoretically the impact of task ordering on the generalisation error, which highlights the importance of the curriculum for lifelong learning.
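For context, the core mechanism behind OGD is to update parameters only along directions orthogonal to gradient directions stored from earlier tasks, so that, to first order, predictions on those tasks are preserved. The sketch below illustrates that projection idea in Python with NumPy; the helper names (project_orthogonal, ogd_step, extend_basis) are illustrative assumptions, not the authors' code, and in the original algorithm the stored directions are gradients of the model's predictions on samples from earlier tasks rather than loss gradients.

```python
import numpy as np

def project_orthogonal(grad, basis):
    """Remove from `grad` its components along the stored orthonormal
    directions collected on previous tasks (Gram-Schmidt style)."""
    for u in basis:
        grad = grad - np.dot(grad, u) * u
    return grad

def ogd_step(params, grad, basis, lr=0.1):
    """One OGD-style update: descend along the projected gradient so that,
    to first order, the model's behaviour on earlier tasks is unchanged."""
    return params - lr * project_orthogonal(grad, basis)

def extend_basis(basis, new_direction):
    """After finishing a task, orthogonalise and normalise a stored gradient
    direction and append it to the basis for use on future tasks."""
    residual = project_orthogonal(new_direction, basis)
    norm = np.linalg.norm(residual)
    if norm > 1e-12:
        basis.append(residual / norm)
    return basis
```

Here `params`, `grad`, and the basis vectors are all flattened parameter-space vectors; the variants analysed in the paper (OGD and OGD+) differ in which directions are stored in the basis, not in this projection step.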

Related articles:
arXiv:1911.09514 [stat.ML] (Published 2019-11-21)
Continual Learning with Adaptive Weights (CLAW)
arXiv:1903.05202 [stat.ML] (Published 2019-03-12)
Continual Learning in Practice
arXiv:2112.01653 [stat.ML] (Published 2021-12-03, updated 2022-03-18)
Learning Curves for Continual Learning in Neural Networks: Self-Knowledge Transfer and Forgetting