arXiv:2310.12244 [cs.LG]

A Unified Approach to Domain Incremental Learning with Memory: Theory and Algorithm

Haizhou Shi, Hao Wang

Published 2023-10-18 (Version 1)

Domain incremental learning aims to adapt to a sequence of domains with access to only a small subset of data (i.e., memory) from previous domains. Various methods have been proposed for this problem, but it is still unclear how they are related and when practitioners should choose one method over another. In response, we propose a unified framework, dubbed Unified Domain Incremental Learning (UDIL), for domain incremental learning with memory. Our UDIL **unifies** various existing methods, and our theoretical analysis shows that UDIL always achieves a tighter generalization error bound compared to these methods. The key insight is that different existing methods correspond to our bound with different **fixed** coefficients; based on insights from this unification, our UDIL allows **adaptive** coefficients during training, thereby always achieving the tightest bound. Empirical results show that our UDIL outperforms the state-of-the-art domain incremental learning methods on both synthetic and real-world datasets. Code will be available at https://github.com/Wang-ML-Lab/unified-continual-learning.
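The abstract's key claim, that any *fixed* choice of coefficients (corresponding to an existing method) is never tighter than an *adaptive* choice, can be illustrated with a minimal toy sketch. This is not the paper's algorithm; the bound terms and function names below are hypothetical placeholders, and the sketch only shows the elementary fact that minimizing a convex combination over the simplex is at least as tight as any fixed convex combination.

```python
import numpy as np

def adaptive_bound(term_values):
    """Toy illustration (not the paper's method): given per-term upper
    bounds, pick convex-combination weights on the simplex that make the
    combined bound tightest. For a linear objective over the simplex,
    the optimum puts all weight on the smallest term."""
    terms = np.asarray(term_values, dtype=float)
    weights = np.zeros_like(terms)
    weights[np.argmin(terms)] = 1.0  # vertex of the simplex minimizing the bound
    return weights, float(terms @ weights)

def fixed_bound(term_values, coeffs):
    """Bound obtained by a fixed coefficient choice (stand-in for how, per
    the abstract, existing methods correspond to fixed coefficients)."""
    return float(np.asarray(term_values, dtype=float) @ np.asarray(coeffs, dtype=float))

# Hypothetical bound terms, e.g. from memory replay, distillation,
# and the current domain -- values are made up for illustration.
terms = [0.9, 0.4, 0.7]
w_star, b_adaptive = adaptive_bound(terms)

# Any fixed convex combination is at least as large as the adaptive bound.
b_fixed = fixed_bound(terms, [1 / 3, 1 / 3, 1 / 3])
assert b_adaptive <= b_fixed
```

In the paper's actual setting the coefficients are learned during training rather than solved in closed form, but the same dominance argument underlies the claim that UDIL "always achieves the tightest bound" among the unified family.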

Related articles:
arXiv:1706.06569 [cs.LG] (Published 2017-06-20)
A Unified Approach to Adaptive Regularization in Online and Stochastic Optimization
arXiv:1601.00318 [cs.LG] (Published 2016-01-03)
A Unified Approach for Learning the Parameters of Sum-Product Networks
arXiv:2311.13718 [cs.LG] (Published 2023-11-22)
A Unified Approach to Count-Based Weakly-Supervised Learning