arXiv:2006.14845 [stat.ML]

Transfer Learning via $\ell_1$ Regularization

Masaaki Takada, Hironori Fujisawa

Published 2020-06-26 (Version 1)

Machine learning algorithms typically require abundant data from a stationary environment. In many real-world applications, however, the environment is nonstationary, and a critical issue is how to adapt models effectively as the environment changes. We propose a method for transferring knowledge from a source domain to a target domain via $\ell_1$ regularization. We incorporate $\ell_1$ regularization of the differences between the source parameters and the target parameters, in addition to the ordinary $\ell_1$ regularization. Hence, our method yields sparsity both in the estimates themselves and in the changes of the estimates. The proposed method has a tight estimation error bound under a stationary environment, and the estimate remains unchanged from the source estimate when the residuals are small. Moreover, the estimate is consistent with the underlying function even when the source estimate is incorrect due to nonstationarity. Empirical results demonstrate that the proposed method effectively balances stability and plasticity.
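A minimal sketch of the objective suggested by this abstract (the exact loss, scaling, and notation may differ in the paper): given target data $(X, y)$ with $n$ observations and a source estimate $\hat{\beta}^{\mathrm{src}}$, the target parameters would be estimated as
$$\hat{\beta} = \operatorname*{arg\,min}_{\beta} \; \frac{1}{2n}\|y - X\beta\|_2^2 + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta - \hat{\beta}^{\mathrm{src}}\|_1,$$
where the $\lambda_1$ term induces sparsity in the estimate itself and the $\lambda_2$ term induces sparsity in its change from the source estimate, so that a sufficiently large $\lambda_2$ leaves the estimate at the source estimate when the residuals are small.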

Related articles:
arXiv:2305.00520 [stat.ML] (Published 2023-04-30)
The ART of Transfer Learning: An Adaptive and Robust Pipeline
arXiv:2002.04495 [stat.ML] (Published 2020-02-11)
On transfer learning of neural networks using bi-fidelity data for uncertainty propagation
arXiv:2410.08194 [stat.ML] (Published 2024-10-10)
Features are fate: a theory of transfer learning in high-dimensional regression