arXiv Analytics


arXiv:2101.11201 [cs.LG]

Similarity of Classification Tasks

Cuong Nguyen, Thanh-Toan Do, Gustavo Carneiro

Published 2021-01-27, Version 1

Recent advances in meta-learning have led to remarkable performance on several few-shot learning benchmarks. However, such success often ignores the similarity between training and testing tasks, resulting in a potentially biased evaluation. We, therefore, propose a generative approach based on a variant of Latent Dirichlet Allocation to analyse task similarity, in order to optimise and better understand the performance of meta-learning. We demonstrate that the proposed method provides an insightful evaluation of meta-learning algorithms on two few-shot classification benchmarks that matches common intuition: the more similar the training and testing tasks, the higher the performance. Based on this similarity measure, we propose a task-selection strategy for meta-learning and show that it produces more accurate classification results than methods that randomly select training tasks.
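The abstract does not specify the paper's LDA variant, but the underlying idea can be sketched with standard tools: represent each classification task as a "document" (a bag of counts over some shared vocabulary of discretised features or labels), infer a per-task topic distribution with LDA, and score task similarity by comparing those distributions. The toy data, the choice of scikit-learn's `LatentDirichletAllocation`, and the use of Jensen-Shannon distance below are all illustrative assumptions, not the authors' method.

```python
# Illustrative sketch (NOT the paper's exact method): tasks as documents,
# LDA topic distributions as task embeddings, similarity = 1 - JS distance.
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Toy "tasks": each row is one task's count vector over a 4-word vocabulary.
# Tasks 0 and 1 share a count profile; task 2 is drawn from a different one.
tasks = np.vstack([
    rng.poisson([20, 18, 2, 1], size=(1, 4)),
    rng.poisson([19, 20, 1, 2], size=(1, 4)),
    rng.poisson([1, 2, 22, 19], size=(1, 4)),
])

# Fit LDA and read off each task's topic mixture (normalised to a simplex).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(tasks)
theta = theta / theta.sum(axis=1, keepdims=True)

def task_similarity(p, q):
    """Similarity in [0, 1]: one minus the Jensen-Shannon distance."""
    return 1.0 - jensenshannon(p, q, base=2)

sim_close = task_similarity(theta[0], theta[1])  # similar tasks
sim_far = task_similarity(theta[0], theta[2])    # dissimilar tasks
print(sim_close, sim_far)
```

Under this sketch, a task-selection strategy would simply rank candidate training tasks by their similarity to the target task and train on the top-ranked ones, rather than sampling tasks uniformly at random.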

Comments: Accepted at the NeurIPS Meta-Learning Workshop 2020
Categories: cs.LG, stat.ML
Related articles:
arXiv:2111.01480 [cs.LG] (Published 2021-11-02, updated 2022-08-25)
A derivation of variational message passing (VMP) for latent Dirichlet allocation (LDA)
arXiv:1701.02960 [cs.LG] (Published 2017-01-11)
Slow mixing for Latent Dirichlet allocation
arXiv:1205.1053 [cs.LG] (Published 2012-05-04)
Variable Selection for Latent Dirichlet Allocation