arXiv:1103.1790 [math.ST]

Rates of convergence in active learning

Steve Hanneke

Published 2011-03-09 (Version 1)

We study the rates of convergence in generalization error achievable by active learning under various types of label noise. Additionally, we study the general problem of model selection for active learning with a nested hierarchy of hypothesis classes and propose an algorithm whose error rate provably converges to the best achievable error among classifiers in the hierarchy at a rate adaptive to both the complexity of the optimal classifier and the noise conditions. In particular, we state sufficient conditions for these rates to be dramatically faster than those achievable by passive learning.
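To make the active-versus-passive comparison concrete, below is a minimal sketch of disagreement-based active learning on a toy class of threshold classifiers. It illustrates the general idea of querying labels only where consistent hypotheses disagree; it is not the paper's adaptive model-selection algorithm, and the names (label, true_t, the interval version space) and the noiseless oracle are assumptions made for the example, whereas the paper's results concern noisy labels.

import numpy as np

# Sketch of disagreement-based active learning (CAL-style) for the
# hypothesis class of thresholds on [0, 1]: h_t(x) = +1 iff x >= t.
# Illustrative only; hypothetical names, noiseless labels assumed.

rng = np.random.default_rng(0)

def label(x, true_t=0.5):
    # Noiseless label oracle for illustration; the paper studies
    # convergence rates under various label-noise conditions.
    return 1 if x >= true_t else -1

# Version space: the interval [lo, hi] of thresholds t consistent
# with all labels queried so far.
lo, hi = 0.0, 1.0
queries = 0

for x in rng.uniform(0, 1, size=1000):  # unlabeled example stream
    # Region of disagreement: consistent thresholds disagree on x
    # exactly when lo < x < hi; outside it, the label is inferred.
    if lo < x < hi:
        y = label(x)  # query the oracle only inside the disagreement region
        queries += 1
        if y == 1:
            hi = min(hi, x)  # positive label forces t <= x
        else:
            lo = max(lo, x)  # negative label forces t > x

print(f"queries used: {queries}, version space: [{lo:.4f}, {hi:.4f}]")

On this noiseless toy class the number of queried labels grows only logarithmically in the stream length, an instance of the exponential label savings over passive learning; the paper's contribution is to characterize when and at what rates such savings persist under label noise, adaptively over a nested hierarchy of hypothesis classes.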

Comments: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/10-AOS843
Journal: Annals of Statistics 2011, Vol. 39, No. 1, 333-361
Categories: math.ST, stat.TH