arXiv:2104.01836 [stat.ML]

Stopping Criterion for Active Learning Based on Error Stability

Hideaki Ishibashi, Hideitsu Hino

Published 2021-04-05 (Version 1)

Active learning is a framework for supervised learning that improves predictive performance by adaptively annotating a small number of samples. Efficient active learning requires both an acquisition function, which determines the next sample to annotate, and a stopping criterion, which determines when to stop learning. In this study, we propose a stopping criterion based on error stability, which guarantees that the change in generalization error upon adding a new sample is bounded by the annotation cost, and which can be applied to any Bayesian active learning method. We demonstrate that the proposed criterion stops active learning at an appropriate time for various learning models and real datasets.
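The abstract leaves the criterion's construction to the paper, but the sketch below illustrates how an error-stability stop of this kind might look for Bayesian linear regression. It rests on assumptions: stability is measured here as the KL divergence between successive posteriors (a common proxy for the change in generalization error), the threshold `lam` stands in for the annotation cost, and `posterior`, `kl_gaussian`, and the variance-based acquisition are illustrative choices, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: y = w.x + noise, with a pool of unlabeled inputs.
d, n_pool = 3, 200
beta, alpha = 25.0, 1.0          # noise precision, prior precision
w_true = rng.normal(size=d)
X_pool = rng.normal(size=(n_pool, d))

def posterior(X, y):
    """Bayesian linear regression posterior N(m, S) under prior N(0, alpha^-1 I)."""
    S = np.linalg.inv(alpha * np.eye(d) + beta * X.T @ X)
    m = beta * S @ X.T @ y
    return m, S

def kl_gaussian(m0, S0, m1, S1):
    """KL( N(m0, S0) || N(m1, S1) ) between successive posteriors."""
    S1_inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

lam = 1e-3                       # assumed threshold reflecting annotation cost
labeled_X, labeled_y = [], []
unlabeled = list(range(n_pool))

# Seed with one random label so the posterior is well defined.
i = unlabeled.pop(0)
labeled_X.append(X_pool[i])
labeled_y.append(X_pool[i] @ w_true + rng.normal(scale=beta ** -0.5))
m, S = posterior(np.array(labeled_X), np.array(labeled_y))

for t in range(n_pool - 1):
    # Acquisition: query the pool point with the largest predictive variance.
    var = np.array([X_pool[j] @ S @ X_pool[j] for j in unlabeled])
    j = unlabeled.pop(int(np.argmax(var)))
    labeled_X.append(X_pool[j])
    labeled_y.append(X_pool[j] @ w_true + rng.normal(scale=beta ** -0.5))
    m_new, S_new = posterior(np.array(labeled_X), np.array(labeled_y))
    # Stop once one more label barely moves the posterior, i.e. once the
    # proxy for the change in generalization error falls below the cost.
    if kl_gaussian(m, S, m_new, S_new) < lam:
        print(f"stopped after {len(labeled_y)} labels")
        break
    m, S = m_new, S_new
```

Under these assumptions, the loop keeps querying the most uncertain pool point and halts as soon as adding a sample can no longer change the posterior, and hence the generalization error, by more than the cost of annotating it.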

Comments: submitted to JMLR
Categories: stat.ML, cs.LG
Related articles:
arXiv:2005.07402 [stat.ML] (Published 2020-05-15)
Stopping criterion for active learning based on deterministic generalization bounds
arXiv:1808.02026 [stat.ML] (Published 2018-08-06)
Active Learning based on Data Uncertainty and Model Sensitivity
arXiv:1905.12791 [stat.ML] (Published 2019-05-29)
The Label Complexity of Active Learning from Observational Data