arXiv:2009.09172 [cs.LG]
Few-shot learning using pre-training and shots, enriched by pre-trained samples
Published 2020-09-19 (Version 1)
We use the EMNIST dataset of handwritten digits to test a simple approach to few-shot learning. A fully connected neural network is pre-trained on a subset of the 10 digits and then used for few-shot learning on the remaining, untrained digits. Two basic ideas are introduced: during few-shot learning the first layer is frozen, and for every shot a previously unknown digit is presented together with four previously trained digits for gradient descent, repeated until a predefined threshold condition on the loss is fulfilled. This way we reach about 90% accuracy after 10 shots.
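The recipe in the abstract can be sketched in plain NumPy. This is an illustrative assumption-laden sketch, not the paper's code: synthetic prototype data stands in for EMNIST, and the network sizes, learning rate, iteration caps, and the 0.2 loss threshold are invented for the example. What it does show faithfully is the two stated ideas: the first layer is updated only during pre-training, and each shot mixes one sample of the unknown class with one sample of each of four trained classes, with gradient descent repeated until the threshold condition is met.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, HIDDEN, N_CLASSES = 16, 32, 5        # 4 pre-trained classes + 1 new class (assumed sizes)
protos = rng.normal(size=(N_CLASSES, DIM))

def sample(c, n):
    """Draw n noisy samples of class c around its prototype (EMNIST stand-in)."""
    return protos[c] + 0.1 * rng.normal(size=(n, DIM))

def one_hot(y):
    out = np.zeros((len(y), N_CLASSES))
    out[np.arange(len(y)), y] = 1.0
    return out

W1 = rng.normal(scale=0.1, size=(DIM, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_CLASSES)); b2 = np.zeros(N_CLASSES)

def step(X, y, lr=0.5, freeze_first=False):
    """One gradient-descent step on softmax cross-entropy; optionally
    keep the first layer fixed, as in the paper's few-shot phase."""
    global W1, b1, W2, b2
    h = np.maximum(0.0, X @ W1 + b1)              # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    d = (p - one_hot(y)) / len(X)                 # dL/dlogits
    if not freeze_first:                          # first layer trainable only in pre-training
        dh = (d @ W2.T) * (h > 0)
        W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(0)
    W2 -= lr * (h.T @ d); b2 -= lr * d.sum(0)
    return -np.log(p[np.arange(len(X)), y] + 1e-12).mean()

# Pre-training on the four "known" classes 0..3.
for _ in range(300):
    y = rng.integers(0, 4, size=20)
    X = np.vstack([sample(c, 1) for c in y])
    step(X, y)

# Few-shot phase: each shot mixes 1 sample of the unknown class (4) with
# one sample of each trained class; gradient descent repeats with the
# first layer frozen until the loss drops below the threshold.
THRESHOLD = 0.2                                   # assumed threshold condition
for shot in range(10):
    X = np.vstack([sample(4, 1)] + [sample(c, 1) for c in range(4)])
    y = np.array([4, 0, 1, 2, 3])
    for _ in range(200):                          # safety cap per shot
        if step(X, y, freeze_first=True) < THRESHOLD:
            break

def accuracy(c):
    X = sample(c, 100)
    h = np.maximum(0.0, X @ W1 + b1)
    return float(((h @ W2 + b2).argmax(axis=1) == c).mean())

acc_new = accuracy(4)
acc_old = np.mean([accuracy(c) for c in range(4)])
print(f"new-class accuracy {acc_new:.2f}, old-class accuracy {acc_old:.2f}")
```

Mixing the four old classes into every shot is what keeps the pre-trained classes from being forgotten while the output layer learns the new one; on this clean synthetic data both accuracies end up high.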
Related articles:
Few-Shot Learning on Graphs: from Meta-learning to Pre-training and Prompting
Xingtong Yu et al.

Revisiting the Updates of a Pre-trained Model for Few-shot Learning
arXiv:2205.07874 [cs.LG] (Published 2022-05-13)

L2AE-D: Learning to Aggregate Embeddings for Few-shot Learning with Meta-level Dropout
arXiv:1904.04339 [cs.LG] (Published 2019-04-08)