arXiv Analytics

arXiv:2103.02265 [cs.LG]

Meta-Learning with Variational Bayes

Lucas D. Lingle

Published 2021-03-03 (Version 1)

The field of meta-learning seeks to improve the ability of today's machine learning systems to adapt efficiently to small amounts of data. Typically this is accomplished by training a system with a parametrized update rule to improve a task-relevant objective based on supervision or a reward function. However, in many domains of practical interest, task data is unlabeled, or reward functions are unavailable. In this paper we introduce a new approach to address the more general problem of generative meta-learning, which we argue is an important prerequisite for obtaining human-level cognitive flexibility in artificial agents, and can benefit many practical applications along the way. Our contribution leverages the auto-encoding variational Bayes (AEVB) framework and mean-field variational Bayes, and creates fast-adapting latent-space generative models. At the heart of our contribution is a new result, showing that for a broad class of deep generative latent variable models, the relevant variational Bayes updates do not depend on any generative neural network.
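The AEVB framework mentioned in the abstract rests on two standard components: a closed-form KL divergence between a diagonal-Gaussian posterior and a standard-normal prior, and the reparameterization trick that makes the sampling step differentiable. The sketch below illustrates these generic building blocks only; the function names are illustrative and not taken from the paper, whose specific update rule is not reproduced here.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ):
    # the regularizer term in the standard AEVB/VAE objective.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so gradients can flow through the sampling step.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# A posterior that exactly matches the prior has zero KL.
mu, logvar = np.zeros(4), np.zeros(4)
print(gaussian_kl(mu, logvar))  # → 0.0
```

The negative evidence lower bound (ELBO) minimized in AEVB is this KL term minus the expected log-likelihood of the data under the decoder, with the expectation estimated via samples from `reparameterize`.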

Related articles:
arXiv:2004.11149 [cs.LG] (Published 2020-04-17)
A Comprehensive Overview and Survey of Recent Advances in Meta-Learning
arXiv:1808.10406 [cs.LG] (Published 2018-08-30)
Towards Reproducible Empirical Research in Meta-Learning
arXiv:1810.02334 [cs.LG] (Published 2018-10-04)
Unsupervised Learning via Meta-Learning