arXiv:1910.07368 [cs.LG]

Model-Agnostic Meta-Learning using Runge-Kutta Methods

Daniel Jiwoong Im, Yibo Jiang, Nakul Verma

Published 2019-10-16 (Version 1)

Meta-learning has emerged as an important framework for learning new tasks from just a few examples. The success of any meta-learning model depends on (i) fast adaptation to new tasks and (ii) a shared representation across similar tasks. Here we extend the model-agnostic meta-learning (MAML) framework introduced by Finn et al. (2017) to achieve improved performance by analyzing the temporal dynamics of the optimization procedure via Runge-Kutta methods. This analysis gives us fine-grained control over the optimization and helps us achieve both the adaptation and representation goals across tasks. By leveraging this refined control, we demonstrate that there are multiple principled ways to update MAML and show that the classic MAML optimization is simply a special case of a second-order Runge-Kutta method that mainly focuses on fast adaptation. Experiments on benchmark classification, regression, and reinforcement learning tasks show that this refined control helps attain improved results.
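To make the connection in the abstract concrete, here is a minimal sketch (not the authors' implementation; all names and the toy loss are hypothetical) of the underlying idea: a plain gradient-descent step, like MAML's inner-loop update, is the forward-Euler discretization of the gradient-flow ODE dθ/dt = -∇L(θ), and a generic explicit two-stage (second-order) Runge-Kutta step generalizes it, with different coefficient choices giving different principled update rules.

```python
import numpy as np

def grad_loss(theta):
    # Hypothetical quadratic task loss L(theta) = 0.5 * ||theta - target||^2,
    # so grad L(theta) = theta - target.
    target = np.array([1.0, -2.0])
    return theta - target

def euler_step(theta, lr):
    # Forward Euler on dtheta/dt = -grad L(theta):
    # the classic gradient-descent (MAML inner-loop) update.
    return theta - lr * grad_loss(theta)

def rk2_step(theta, lr, a=0.5, b1=0.0, b2=1.0):
    # Generic explicit two-stage Runge-Kutta step on the same ODE.
    # a=0.5, b1=0.0, b2=1.0 is the midpoint method; the abstract's claim is
    # that the MAML update corresponds to one particular choice of such
    # coefficients, and that other consistent choices (b1 + b2 = 1) yield
    # alternative principled updates.
    k1 = -grad_loss(theta)
    k2 = -grad_loss(theta + a * lr * k1)
    return theta + lr * (b1 * k1 + b2 * k2)

theta0 = np.zeros(2)
print("Euler / plain gradient step:", euler_step(theta0, lr=0.1))
print("Two-stage Runge-Kutta step: ", rk2_step(theta0, lr=0.1))
```

Note that the second stage evaluates the gradient at a lookahead point, which is the same flavor of "gradient at an adapted parameter" that appears in the MAML meta-objective; the paper's actual derivation should be consulted for the precise correspondence.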

Related articles:
arXiv:2406.00249 [cs.LG] (Published 2024-06-01)
Privacy Challenges in Meta-Learning: An Investigation on Model-Agnostic Meta-Learning
arXiv:1812.02159 [cs.LG] (Published 2018-12-05)
The effects of negative adaptation in Model-Agnostic Meta-Learning
arXiv:1907.11864 [cs.LG] (Published 2019-07-27)
Uncertainty in Model-Agnostic Meta-Learning using Variational Inference