arXiv Analytics

arXiv:2304.05246 [cs.LG]

OpenAL: Evaluation and Interpretation of Active Learning Strategies

W. Jonas, A. Abraham, L. Dreyfus-Schmidt

Published 2023-04-11, Version 1

Despite the vast body of literature on Active Learning (AL), there is no comprehensive and open benchmark allowing for efficient and simple comparison of proposed samplers. Additionally, the variability in experimental settings across the literature makes it difficult to choose a sampling strategy, a choice that is critical given the one-off nature of AL experiments. To address these limitations, we introduce OpenAL, a flexible and open-source framework to easily run and compare AL sampling strategies on a collection of realistic tasks. The proposed benchmark is augmented with interpretability metrics and statistical analysis methods to understand when and why some samplers outperform others. Last but not least, practitioners can easily extend the benchmark by submitting their own AL samplers.
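To make the object of comparison concrete, below is a minimal sketch of the kind of pool-based AL loop such benchmarks evaluate, using least-confidence uncertainty sampling with scikit-learn. The dataset, model, query size, and all names are illustrative assumptions; this does not reflect OpenAL's actual interface.

```python
# Hypothetical pool-based active-learning loop with least-confidence sampling.
# Illustrative only: not the OpenAL API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_pool, y_pool, X_test, y_test = X[:1500], y[:1500], X[1500:], y[1500:]

labeled = list(rng.choice(len(X_pool), size=20, replace=False))  # seed set
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

for step in range(10):  # 10 AL iterations, querying 20 labels each
    model = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
    proba = model.predict_proba(X_pool[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)        # least-confidence score
    query = np.argsort(uncertainty)[-20:]        # most uncertain pool points
    newly_labeled = [unlabeled[i] for i in query]
    labeled += newly_labeled
    unlabeled = [i for i in unlabeled if i not in newly_labeled]
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"iter {step}: labeled={len(labeled)} test_acc={acc:.3f}")
```

A benchmark in the spirit of the paper would swap the query rule (the `uncertainty`/`query` lines) for different samplers while holding the task, model, and budget fixed, then compare the resulting learning curves.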

Comments: Published in NeurIPS 2022 Workshop on Human in the Loop Learning, 8 pages
Categories: cs.LG, cs.AI, cs.HC
Subjects: I.2.6
Related articles:
arXiv:cs/0212014 [cs.LG] (Published 2002-12-08)
Extraction of Keyphrases from Text: Evaluation of Four Algorithms
arXiv:1505.00401 [cs.LG] (Published 2015-05-03)
Visualization of Tradeoff in Evaluation: from Precision-Recall & PN to LIFT, ROC & BIRD
arXiv:2206.04921 [cs.LG] (Published 2022-06-10)
Offline Stochastic Shortest Path: Learning, Evaluation and Towards Optimality