
arXiv:2104.12174 [cs.LG]

Demystification of Few-shot and One-shot Learning

Ivan Y. Tyukin, Alexander N. Gorban, Muhammad H. Alkhudaydi, Qinghua Zhou

Published 2021-04-25, Version 1

Few-shot and one-shot learning have been the subject of active and intensive research in recent years, with mounting evidence pointing to successful implementation and exploitation of few-shot learning algorithms in practice. Classical statistical learning theories do not fully explain why few- or one-shot learning is possible at all, since traditional generalisation bounds normally require large training and testing samples to be meaningful. This sharply contrasts with numerous examples of successful one- and few-shot learning systems and applications. In this work we present mathematical foundations for a theory of one-shot and few-shot learning and reveal conditions specifying when such learning schemes are likely to succeed. Our theory is based on intrinsic properties of high-dimensional spaces. We show that if the ambient or latent decision space of a learning machine is sufficiently high-dimensional, then a large class of objects in this space can indeed be easily learned from few examples, provided that certain data non-concentration conditions are met.
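The high-dimensional intuition behind this claim can be illustrated numerically (a minimal sketch, not the authors' construction): for points sampled uniformly in the unit ball, a given point x is, with high probability, linearly separable from every other sample y by the simple hyperplane test ⟨x, y⟩ < ⟨x, x⟩ once the dimension is large, whereas the same test fails often in low dimension. The sampling scheme and the 0.9 threshold below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ball(n, d, rng):
    """Draw n points uniformly from the unit ball in R^d:
    a random direction scaled by U^(1/d) for the radius."""
    g = rng.standard_normal((n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    r = rng.random(n) ** (1.0 / d)
    return g * r[:, None]

def separable_fraction(X):
    """Fraction of points x satisfying <x, y> < <x, x> for every
    other sample y (a simple linear-separability criterion)."""
    G = X @ X.T                       # pairwise inner products
    diag = np.diag(G)                 # the <x, x> terms
    off = G - np.diag(diag)           # zero out the diagonal
    return float(np.mean(off.max(axis=1) < diag))

frac_low = separable_fraction(sample_ball(1000, 2, rng))    # d = 2
frac_high = separable_fraction(sample_ball(1000, 200, rng)) # d = 200
print(f"d=2:   {frac_low:.3f}")
print(f"d=200: {frac_high:.3f}")
```

In 200 dimensions essentially every sample passes the test, because inner products between independent points concentrate near zero at scale 1/√d while ⟨x, x⟩ stays near 1; in 2 dimensions most samples fail. This kind of quasi-orthogonality is what makes correcting or learning from a single example plausible in high-dimensional decision spaces.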

Comments: IEEE International Joint Conference on Neural Networks, IJCNN 2021
Categories: cs.LG, cs.AI, math.ST, stat.TH
Subjects: 68T05, 68T07
Related articles:
arXiv:2201.09202 [cs.LG] (Published 2022-01-23)
One-Shot Learning on Attributed Sequences
arXiv:2105.00202 [cs.LG] (Published 2021-05-01)
One-shot learning for acoustic identification of bird species in non-stationary environments
arXiv:2308.15885 [cs.LG] (Published 2023-08-30)
Towards One-Shot Learning for Text Classification using Inductive Logic Programming