arXiv:2206.01563 [cs.LG]

Optimal Weak to Strong Learning

Kasper Green Larsen, Martin Ritzert

Published 2022-06-03 (Version 1)

The classic AdaBoost algorithm allows one to convert a weak learner, that is, an algorithm producing a hypothesis that is only slightly better than random guessing, into a strong learner that achieves arbitrarily high accuracy when given enough training data. We present a new algorithm that constructs a strong learner from a weak learner but uses less training data than AdaBoost and all other weak-to-strong learners to achieve the same generalization bounds. A matching sample complexity lower bound shows that our new algorithm uses the minimum possible amount of training data and is therefore optimal. Hence, this work settles the sample complexity of the classic problem of constructing a strong learner from a weak learner.
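To make the setting concrete, below is a minimal sketch of the classic AdaBoost scheme referenced in the abstract, not the paper's new algorithm. Depth-1 decision stumps stand in for the weak learner, and the weighted majority vote over them is the resulting strong learner; the stump choice, the round count T, and all names are illustrative assumptions rather than details from the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, T=50):
    """Classic AdaBoost with depth-1 decision stumps as the weak learner.

    Labels y are expected to be in {-1, +1}. Returns a callable that
    predicts with the weighted majority vote of the learned stumps.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)              # start with uniform sample weights
    stumps, alphas = [], []
    for _ in range(T):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)  # weak learner trained on reweighted data
        pred = stump.predict(X)
        err = np.sum(w * (pred != y))     # weighted training error of this round
        if err >= 0.5:                    # no longer better than chance; stop
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        w *= np.exp(-alpha * y * pred)    # up-weight misclassified points
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)

    def strong_learner(X_new):
        votes = sum(a * s.predict(X_new) for a, s in zip(alphas, stumps))
        return np.sign(votes)

    return strong_learner

# Illustrative usage on a toy dataset (hypothetical, for demonstration only):
# X = np.random.randn(200, 2); y = np.sign(X[:, 0] + X[:, 1])
# h = adaboost(X, y); train_accuracy = np.mean(h(X) == y)

Each round reweights the training sample so that the next weak hypothesis focuses on previously misclassified points; the paper's contribution concerns how few training samples such a weak-to-strong conversion can get away with, not the voting scheme itself.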
