arXiv:1911.01914 [cs.LG]

A Comparative Analysis of XGBoost

Candice Bentéjac, Anna Csörgő, Gonzalo Martínez-Muñoz

Published 2019-11-05 (Version 1)

XGBoost is a scalable ensemble technique based on gradient boosting that has been shown to be a reliable and efficient solver of machine learning challenges. This work presents a practical analysis of how this technique performs in terms of training speed, generalization performance and parameter setup. In addition, a comprehensive comparison between XGBoost, random forests and gradient boosting is performed, using both carefully tuned models and the default settings. The results of this comparison indicate that XGBoost is not necessarily the best choice under all circumstances. Finally, an extensive analysis of the XGBoost parameter tuning process is carried out.
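As a rough illustration of the default-settings comparison the abstract describes, the Python sketch below pits XGBoost against scikit-learn's random forest and gradient boosting implementations. The dataset, metric and cross-validation setup here are assumptions for illustration, not the paper's experimental protocol.

```python
# Minimal sketch of a default-settings comparison between XGBoost,
# random forests and gradient boosting. The benchmark dataset and
# 5-fold accuracy metric are illustrative choices, not the paper's.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)

# Each model is left at its library's default hyperparameters.
models = {
    "XGBoost": XGBClassifier(),
    "Random forest": RandomForestClassifier(),
    "Gradient boosting": GradientBoostingClassifier(),
}

for name, model in models.items():
    # Cross-validated accuracy under default settings.
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

The tuned-model side of the comparison would replace each default constructor with a hyperparameter search (for example, scikit-learn's GridSearchCV over learning rate, tree depth and ensemble size), which is what the paper's tuning analysis concerns.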

Related articles:
arXiv:1909.12098 [cs.LG] (Published 2019-09-26)
Sequential Training of Neural Networks with Gradient Boosting
arXiv:2204.06895 [cs.LG] (Published 2022-04-14)
Gradient boosting for convex cone predict and optimize problems
arXiv:2209.12309 [cs.LG] (Published 2022-09-25)
Feature Encodings for Gradient Boosting with Automunge