arXiv Analytics

arXiv:1808.03064 [stat.ML]

Gradient and Newton Boosting for Classification and Regression

Fabio Sigrist

Published 2018-08-09 (Version 1)

Boosting algorithms enjoy widespread popularity due to their high predictive accuracy on a wide array of datasets. In this article, we argue that it is important to distinguish between three types of statistical boosting algorithms: gradient boosting, Newton boosting, and a hybrid variant of the two. To date, both researchers and practitioners often do not discriminate between these boosting variants. We compare the different boosting algorithms on a wide range of real and simulated datasets for various choices of loss functions, using trees as base learners. In addition, we introduce a novel tuning parameter for Newton boosting. We find that Newton boosting performs substantially better than the other boosting variants for classification, and that the novel tuning parameter is important for predictive accuracy.
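The three variants differ only in how the base tree and its leaf values are fit to the first- and second-order derivatives of the loss. Below is a minimal sketch of this distinction for binary classification with the logistic loss, using scikit-learn regression trees as base learners; the shrinkage rate nu, the 1e-12 guard against vanishing Hessians, and the in-place leaf update are illustrative assumptions, not the paper's exact algorithm or its novel tuning parameter.

```python
# Sketch: gradient vs. Newton vs. hybrid boosting updates (logistic loss).
# Assumptions (not from the paper): shrinkage nu instead of a line search,
# a 1e-12 guard for near-zero Hessians, scikit-learn trees as base learners.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def logistic_grad_hess(y, f):
    """First/second derivatives of the logistic loss w.r.t. the score f.

    y is in {0, 1}; f is the current ensemble prediction (log-odds)."""
    p = 1.0 / (1.0 + np.exp(-f))
    return p - y, p * (1.0 - p)  # per-sample gradient and Hessian

def boost(X, y, n_iter=100, nu=0.1, variant="newton", max_depth=3):
    f = np.zeros(len(y))  # f_0 = 0, i.e. initial probability 0.5
    trees = []
    for _ in range(n_iter):
        grad, hess = logistic_grad_hess(y, f)
        if variant == "gradient":
            # Gradient boosting: unweighted least-squares fit to the
            # negative gradient; shrinkage stands in for a line search.
            tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, -grad)
        elif variant == "newton":
            # Newton boosting: weighted least squares with Hessian weights
            # and target -grad/hess, so each leaf holds the Newton step
            # -sum(grad) / sum(hess) over the samples it contains.
            tree = DecisionTreeRegressor(max_depth=max_depth).fit(
                X, -grad / (hess + 1e-12), sample_weight=hess)
        else:
            # Hybrid: the tree structure is found as in gradient boosting,
            # then the leaf values are replaced by Newton steps in place.
            tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, -grad)
            leaves = tree.apply(X)
            for leaf in np.unique(leaves):
                m = leaves == leaf
                tree.tree_.value[leaf, 0, 0] = (
                    -grad[m].sum() / (hess[m].sum() + 1e-12))
        f += nu * tree.predict(X)
        trees.append(tree)
    return trees, f

# Toy comparison on synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] + rng.normal(size=500) > 0).astype(float)
for variant in ("gradient", "hybrid", "newton"):
    _, f = boost(X, y, n_iter=50, variant=variant)
    print(variant, "train accuracy:", round(((f > 0) == (y == 1)).mean(), 3))
```

Writing the Newton update as a Hessian-weighted least-squares problem is what makes it a one-line change from the gradient version; second-order boosting of this form is also what libraries such as XGBoost and LightGBM use by default.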

Related articles:
arXiv:2012.10737 [stat.ML] (Published 2020-12-19)
(Decision and regression) tree ensemble based kernels for regression and classification
arXiv:2009.00089 [stat.ML] (Published 2020-08-31)
Random Forest (RF) Kernel for Regression, Classification and Survival
arXiv:1803.00276 [stat.ML] (Published 2018-03-01)
Model-Based Clustering and Classification of Functional Data