arXiv Analytics

arXiv:1904.08479 [cs.CV]

LCC: Learning to Customize and Combine Neural Networks for Few-Shot Learning

Yaoyao Liu, Qianru Sun, An-An Liu, Yuting Su, Bernt Schiele, Tat-Seng Chua

Published 2019-04-17 (Version 1)

Meta-learning has been shown to be an effective strategy for few-shot learning. The key idea is to leverage a large number of similar few-shot tasks in order to meta-learn how to best initialize a (single) base-learner for novel few-shot tasks. While meta-learning how to initialize a base-learner has shown promising results, it is well known that hyperparameter settings such as the learning rate and the weighting of the regularization term are important to achieve the best performance. We thus propose to also meta-learn these hyperparameters and, in fact, learn a time- and layer-varying scheme for learning a base-learner on novel tasks. Additionally, we propose to learn not only a single base-learner but an ensemble of several base-learners to obtain more robust results. While ensembles of learners have been shown to improve performance in various settings, this is challenging for few-shot learning tasks due to the limited number of training samples. Therefore, our approach also aims to meta-learn how to effectively combine several base-learners. We conduct extensive experiments and report top performance for five-class few-shot recognition tasks on two challenging benchmarks: miniImageNet and Fewshot-CIFAR100 (FC100).
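The two ideas in the abstract, meta-learned time- and layer-varying hyperparameters for the inner-loop update, and a learned weighting for combining base-learners, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the names (`alpha`, `lam`, `combine_weights`) and the plain-NumPy setup are assumptions for clarity.

```python
import numpy as np

def inner_loop_adapt(weights, grads_fn, alpha, lam, num_steps):
    """Adapt per-layer weights on a novel task for `num_steps` steps.

    alpha[t][l]: meta-learned learning rate for step t, layer l.
    lam[t][l]:   meta-learned L2 regularization weight for step t, layer l.
    (Hypothetical names; in the paper these schedules are themselves
    meta-learned across many few-shot tasks.)
    """
    w = [layer.copy() for layer in weights]
    for t in range(num_steps):
        grads = grads_fn(w)  # per-layer gradients of the task loss
        for l in range(len(w)):
            # gradient step plus weight decay, both on learned schedules
            w[l] -= alpha[t][l] * (grads[l] + lam[t][l] * w[l])
    return w

def combine_ensemble(predictions, combine_weights):
    """Combine base-learner predictions with softmax-normalized
    meta-learned weights (illustrative combination rule)."""
    v = np.exp(combine_weights - np.max(combine_weights))
    v /= v.sum()
    return sum(wi * p for wi, p in zip(v, predictions))
```

In this sketch the outer (meta) loop would backpropagate through `inner_loop_adapt` to update `alpha`, `lam`, and `combine_weights`; that step is omitted here since it depends on the chosen autodiff framework.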

Related articles:
arXiv:1904.08502 [cs.CV] (Published 2019-04-09)
Few-Shot Learning with Localization in Realistic Settings
arXiv:1910.03560 [cs.CV] (Published 2019-10-08)
When Does Self-supervision Improve Few-shot Learning?
arXiv:2003.04390 [cs.CV] (Published 2020-03-09)
A New Meta-Baseline for Few-Shot Learning