arXiv Analytics

arXiv:1812.02159 [cs.LG]

The effects of negative adaptation in Model-Agnostic Meta-Learning

Tristan Deleu, Yoshua Bengio

Published 2018-12-05 (Version 1)

The capacity of meta-learning algorithms to quickly adapt to a variety of tasks, including ones they did not experience during meta-training, has been a key factor in the recent success of these methods on few-shot learning problems. This particular advantage of meta-learning over standard supervised or reinforcement learning is only well-founded under the assumption that the adaptation phase actually improves the performance of the model on the task of interest. In the classical meta-learning framework, however, this constraint is only mildly enforced, if at all, and improvement is only guaranteed on average over a distribution of tasks. In this paper, we show that the adaptation step in an algorithm like MAML can significantly decrease the performance of an agent in a meta-reinforcement learning setting, even on a range of meta-training tasks.
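The failure mode the abstract describes can be sketched on a toy problem. Below is a minimal, illustrative example (not from the paper): each task is a one-dimensional quadratic loss, and "adaptation" is a single MAML-style inner gradient step with a shared step size `alpha`. A gradient step improves the loss only when `alpha` times the task's curvature is below 2, so tasks with high curvature are made *worse* by adaptation even though low-curvature tasks improve. All functions and numbers here are hypothetical choices for illustration.

```python
# Toy illustration of negative adaptation in a MAML-style inner loop.
# Each task t has loss L_t(theta) = 0.5 * c_t * (theta - mu_t)^2,
# where mu_t is the task optimum and c_t its curvature.

def loss(theta, mu, c):
    """Quadratic task loss centered at mu with curvature c."""
    return 0.5 * c * (theta - mu) ** 2

def adapt(theta, mu, c, alpha=0.5):
    """One inner-loop gradient step: theta' = theta - alpha * dL/dtheta."""
    grad = c * (theta - mu)
    return theta - alpha * grad

theta0 = 0.0  # shared meta-initialization
tasks = [(1.0, 1.0), (1.0, 5.0)]  # (mu_t, c_t): low vs. high curvature

for mu, c in tasks:
    before = loss(theta0, mu, c)
    after = loss(adapt(theta0, mu, c), mu, c)
    # With alpha = 0.5: improvement iff alpha * c < 2.
    # c = 1 improves (0.500 -> 0.125); c = 5 overshoots (2.500 -> 5.625).
    print(f"c={c}: loss {before:.3f} -> {after:.3f}")
```

Averaged over both tasks, adaptation could still look beneficial for a suitable task distribution, which is exactly why the on-average objective used in meta-training does not rule out this per-task degradation.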

Comments: Workshop on Meta-Learning - 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montreal, Canada
Categories: cs.LG, stat.ML
Related articles:
arXiv:1910.07368 [cs.LG] (Published 2019-10-16)
Model-Agnostic Meta-Learning using Runge-Kutta Methods
arXiv:2406.00249 [cs.LG] (Published 2024-06-01)
Privacy Challenges in Meta-Learning: An Investigation on Model-Agnostic Meta-Learning
arXiv:1907.11864 [cs.LG] (Published 2019-07-27)
Uncertainty in Model-Agnostic Meta-Learning using Variational Inference