arXiv Analytics

arXiv:2002.00573 [cs.LG]

Revisiting Meta-Learning as Supervised Learning

Wei-Lun Chao, Han-Jia Ye, De-Chuan Zhan, Mark Campbell, Kilian Q. Weinberger

Published: 2020-02-03 (Version 1)

Recent years have witnessed an abundance of new publications and approaches on meta-learning. This community-wide enthusiasm has sparked great insights but has also created a plethora of seemingly different frameworks, which can be hard to compare and evaluate. In this paper, we aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning. By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning. This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning. For example, we obtain a better understanding of generalization properties, and we can readily transfer well-understood techniques, such as model ensemble, pre-training, joint training, data augmentation, and even nearest neighbor based methods. We provide an intuitive analogy of these methods in the context of meta-learning and show that they give rise to significant improvements in model performance on few-shot learning.
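
As a concrete illustration of the reduction described in the abstract, below is a minimal sketch, not the authors' implementation: the permutation-invariant set encoder, the synthetic linear tasks, and all hyperparameters are illustrative assumptions. It shows the (data set, model) as (feature, label) view, where a network regresses from a task's support set to the weights of a linear classifier, so meta-training becomes ordinary supervised learning over such pairs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SetToModel(nn.Module):
    """Maps a task-specific support set (the 'feature') to the weights of a
    linear classifier (the 'label'), via a permutation-invariant set encoder."""
    def __init__(self, in_dim, n_classes, hidden=64):
        super().__init__()
        self.in_dim, self.n_classes = in_dim, n_classes
        self.phi = nn.Sequential(nn.Linear(in_dim + n_classes, hidden), nn.ReLU())
        self.rho = nn.Linear(hidden, n_classes * in_dim)

    def forward(self, x, y):
        y_onehot = F.one_hot(y, self.n_classes).float()
        z = self.phi(torch.cat([x, y_onehot], dim=1)).mean(dim=0)  # pool over the set
        return self.rho(z).view(self.n_classes, self.in_dim)       # predicted weights

def sample_from_task(W_true, n):
    """Synthetic task data: inputs labeled by a ground-truth linear model W_true."""
    x = torch.randn(n, W_true.shape[1])
    return x, (x @ W_true.T).argmax(dim=1)

in_dim, n_classes = 8, 3
net = SetToModel(in_dim, n_classes)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Meta-training as plain supervised learning over (support set, target model) pairs:
# predicted weights are scored by how well they classify query data from the same task.
for step in range(500):
    W_true = torch.randn(n_classes, in_dim)              # a fresh synthetic task
    support_x, support_y = sample_from_task(W_true, 15)  # the "feature"
    query_x, query_y = sample_from_task(W_true, 15)
    W_pred = net(support_x, support_y)                   # the predicted "label"
    loss = F.cross_entropy(query_x @ W_pred.T, query_y)
    opt.zero_grad(); loss.backward(); opt.step()

# At meta-test time, a new few-shot task's support set is mapped to a classifier directly.
```

Mean-pooling keeps the mapping invariant to the order of support examples; richer set embeddings or target-model parameterizations slot into the same (feature, label) view without changing the supervised-learning structure.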

Comments: An extended version of the paper titled "A Meta Understanding of Meta-Learning", presented at the ICML 2019 Workshop on Adaptive and Multitask Learning: Algorithms & Systems
Categories: cs.LG, cs.CV, stat.ML
Related articles:
arXiv:2010.00522 [cs.LG] (Published 2020-10-01)
Understanding the Role of Adversarial Regularization in Supervised Learning
arXiv:2002.03555 [cs.LG] (Published 2020-02-10)
Supervised Learning: No Loss No Cry
arXiv:2202.04513 [cs.LG] (Published 2022-02-09)
The no-free-lunch theorems of supervised learning