arXiv Analytics


arXiv:1905.06549 [cs.LG]

TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning

Sung Whan Yoon, Jun Seo, Jaekyun Moon

Published 2019-05-16, Version 1

Handling previously unseen tasks after being given only a few training examples continues to be a tough challenge in machine learning. We propose TapNets, neural networks augmented with task-adaptive projection for improved few-shot learning. Here, employing a meta-learning strategy with episode-based training, a network and a set of per-class reference vectors are learned across widely varying tasks. At the same time, for every episode, features in the embedding space are linearly projected into a new space as a form of quick task-specific conditioning. The training loss is obtained based on a distance metric between the query and the reference vectors in the projection space. This approach yields excellent generalization. When tested on the Omniglot, miniImageNet and tieredImageNet datasets, we obtain state-of-the-art classification accuracies under various few-shot scenarios.
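The classification step described above — project query features and per-class reference vectors into a new space, then score by distance — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the reference vectors `phi` and projection matrix `M` are random stand-ins here, whereas in TapNet `phi` is meta-learned and `M` is constructed per episode for task-adaptive conditioning.

```python
import numpy as np

rng = np.random.default_rng(0)
emb_dim, proj_dim, num_classes = 64, 32, 5

# Hypothetical stand-ins for the learned quantities (random for illustration):
phi = rng.normal(size=(num_classes, emb_dim))  # per-class reference vectors
M = rng.normal(size=(emb_dim, proj_dim))       # task-adaptive linear projection

def classify(query_emb, phi, M):
    """Score classes by negative squared distance in the projection space."""
    q = query_emb @ M                      # query feature, projected
    refs = phi @ M                         # reference vectors, projected
    d2 = ((refs - q) ** 2).sum(axis=1)     # squared distance to each class
    logits = -d2
    p = np.exp(logits - logits.max())      # numerically stable softmax
    return p / p.sum()

query = rng.normal(size=emb_dim)           # embedding of one query example
probs = classify(query, phi, M)            # class probabilities, shape (5,)
```

At training time, the cross-entropy of these probabilities against the query's true label would serve as the episode loss, and gradients would update both the embedding network and `phi`.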

Comments: To appear in 36th International Conference on Machine Learning (ICML), Long Beach, California, PMLR 97, 2019
Categories: cs.LG, cs.AI, stat.ML
Related articles:
arXiv:1911.11090 [cs.LG] (Published 2019-11-25)
Meta-Learning of Neural Architectures for Few-Shot Learning
arXiv:2205.15619 [cs.LG] (Published 2022-05-31)
Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks
arXiv:1902.04552 [cs.LG] (Published 2019-02-12)
Infinite Mixture Prototypes for Few-Shot Learning