arXiv Analytics


arXiv:2311.17885 [stat.ML]

Are ensembles getting better all the time?

Pierre-Alexandre Mattei, Damien Garreau

Published 2023-11-29 (Version 1)

Ensemble methods combine the predictions of several base models. We study whether or not including more models in an ensemble always improves its average performance. The answer depends on the kind of ensemble considered, as well as on the predictive metric chosen. We focus on situations where all members of the ensemble are a priori expected to perform equally well, as is the case for several popular methods such as random forests or deep ensembles. In this setting, we essentially show that ensembles keep getting better if, and only if, the considered loss function is convex. More precisely, in that case, the average loss of the ensemble is a decreasing function of the number of models. When the loss function is nonconvex, we show a series of results that can be summarised by the insight that ensembles of good models keep getting better, and ensembles of bad models keep getting worse. To this end, we prove a new result on the monotonicity of tail probabilities that may be of independent interest. We illustrate our results on a simple machine learning problem (diagnosing melanomas using neural nets).
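The two regimes described in the abstract can be illustrated numerically. The following is a minimal sketch (not the paper's melanoma experiment, and the parameter choices are hypothetical): under a convex loss (squared error), averaging i.i.d. unbiased predictors gives an expected loss of sigma^2/k, which shrinks as the number of models k grows; under the nonconvex 0-1 loss, the majority vote of k independent classifiers, each correct with probability p, improves with k when p > 1/2 ("good" models) and degrades when p < 1/2 ("bad" models), by the classical Condorcet argument.

```python
import numpy as np
from math import comb

# 1) Convex loss: ensemble averaging of i.i.d. unbiased predictors.
#    The expected squared error of the k-model average is sigma^2 / k,
#    so the ensemble keeps improving as k grows.
rng = np.random.default_rng(0)
target, sigma, n_models = 1.0, 1.0, 20
preds = target + rng.normal(0.0, sigma, size=(200_000, n_models))
# Running ensemble means over the first k models, k = 1..n_models.
cum_means = np.cumsum(preds, axis=1) / np.arange(1, n_models + 1)
sq_loss = ((cum_means - target) ** 2).mean(axis=0)

# 2) Nonconvex (0-1) loss: majority vote of k independent classifiers,
#    each correct with probability p (k odd, so no ties).
def majority_accuracy(k: int, p: float) -> float:
    """Exact probability that a majority of k i.i.d. voters is correct."""
    return sum(comb(k, j) * p**j * (1 - p) ** (k - j)
               for j in range(k // 2 + 1, k + 1))

odd_ks = [1, 3, 5, 7, 9, 11]
good = [majority_accuracy(k, 0.7) for k in odd_ks]  # "good" base models
bad = [majority_accuracy(k, 0.3) for k in odd_ks]   # "bad" base models
```

Here `sq_loss` decreases towards sigma^2/n_models, `good` increases monotonically in k, and `bad` decreases monotonically, matching the "good models keep getting better, bad models keep getting worse" summary.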
