{ "id": "1909.11722", "version": "v1", "published": "2019-09-25T19:33:05.000Z", "updated": "2019-09-25T19:33:05.000Z", "title": "A Theoretical Analysis of the Number of Shots in Few-Shot Learning", "authors": [ "Tianshi Cao", "Marc Law", "Sanja Fidler" ], "comment": "15 pages incl. appendix, 6 figures", "categories": [ "cs.LG", "cs.AI", "stat.ML" ], "abstract": "Few-shot classification is the task of predicting the category of an example from a set of few labeled examples. The number of labeled examples per category is called the number of shots (or shot number). Recent works tackle this task through meta-learning, where a meta-learner extracts information from observed tasks during meta-training to quickly adapt to new tasks during meta-testing. In this formulation, the number of shots exploited during meta-training has an impact on the recognition performance at meta-test time. Generally, the shot number used in meta-training should match the one used in meta-testing to obtain the best performance. We introduce a theoretical analysis of the impact of the shot number on Prototypical Networks, a state-of-the-art few-shot classification method. From our analysis, we propose a simple method that is robust to the choice of shot number used during meta-training, which is a crucial hyperparameter. The performance of our model trained for an arbitrary meta-training shot number shows great performance for different values of meta-testing shot numbers. We experimentally demonstrate our approach on different few-shot classification benchmarks.", "revisions": [ { "version": "v1", "updated": "2019-09-25T19:33:05.000Z" } ], "analyses": { "keywords": [ "theoretical analysis", "few-shot learning", "performance", "state-of-the-art few-shot classification method", "labeled examples" ], "note": { "typesetting": "TeX", "pages": 15, "language": "en", "license": "arXiv", "status": "editable" } } }