arXiv:2204.06895 [cs.LG]

Gradient boosting for convex cone predict and optimize problems

Andrew Butler, Roy H. Kwon

Published 2022-04-14 (Version 1)

Many problems in engineering and statistics involve both predictive forecasting and decision-based optimization. Traditionally, predictive models are optimized independently of the final decision-based optimization problem. In contrast, a 'smart predict, then optimize' (SPO) framework trains prediction models to explicitly minimize the final downstream decision loss. In this paper we present dboost, a gradient boosting algorithm for training ensembles of prediction models to minimize decision regret. The dboost framework supports any convex optimization program that can be cast as a convex quadratic cone program, and gradient boosting is performed by implicit differentiation of a custom fixed-point mapping. To our knowledge, dboost is the first general-purpose implementation of gradient boosting for predict-and-optimize problems. Experimental results comparing dboost with state-of-the-art SPO methods show that it can further reduce out-of-sample decision regret.
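The core loop is ordinary functional gradient boosting with the squared-error residual replaced by the gradient of the downstream decision loss, obtained by differentiating through the solution map of the optimization problem. The Python sketch below illustrates that loop under strong simplifying assumptions: the cone program is replaced by an equality-constrained quadratic program whose solution map z*(c) has a closed form, so the implicit differentiation of the paper's fixed-point mapping reduces to a hand-derived Jacobian. All names (solve_qp, fit_dboost, and so on) are illustrative, not the paper's API.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def solve_qp(c):
    # Stand-in decision problem (not the paper's general cone program):
    #   z*(c) = argmin_z 0.5*||z||^2 + c'z  subject to  1'z = 1,
    # which has the closed form below (derived from the KKT conditions).
    n = c.shape[-1]
    return -c + (1.0 + c.sum(axis=-1, keepdims=True)) / n

def decision_regret_grad(c_hat, c_true):
    # Regret R(c_hat, c_true) = f(z*(c_hat); c_true) - f(z*(c_true); c_true)
    # with f(z; c) = 0.5*||z||^2 + c'z. Chain rule through the solution map:
    #   dR/dc_hat = (dz*/dc)' (z*(c_hat) + c_true),
    # which for this QP simplifies to the row-centered prediction residual.
    d = c_hat - c_true
    return d - d.mean(axis=-1, keepdims=True)

def fit_dboost(X, C, n_rounds=50, lr=0.1, max_depth=3):
    # Functional gradient boosting: each round fits a multi-output
    # regression tree to the negative decision-loss gradient.
    pred = np.zeros_like(C)
    trees = []
    for _ in range(n_rounds):
        g = decision_regret_grad(pred, C)
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, -g)
        pred += lr * tree.predict(X)
        trees.append(tree)
    return trees

def predict(trees, X, lr=0.1):
    c_hat = sum(lr * t.predict(X) for t in trees)
    return c_hat, solve_qp(c_hat)

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
C = X @ rng.normal(size=(5, 4)) + 0.1 * rng.normal(size=(500, 4))
trees = fit_dboost(X, C)
c_hat, decisions = predict(trees, X)

Two remarks on the sketch. First, the gradient lies in the range of the centering projection, so predictions are only driven toward the true costs up to a per-row constant shift, which is exactly the direction the decision problem cannot distinguish. Second, in the dboost setting proper, solve_qp would be a generic conic solver and the gradient would come from implicitly differentiating its fixed-point conditions rather than from a closed-form Jacobian.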

Related articles:
arXiv:1909.12098 [cs.LG] (Published 2019-09-26)
Sequential Training of Neural Networks with Gradient Boosting
arXiv:2209.12309 [cs.LG] (Published 2022-09-25)
Feature Encodings for Gradient Boosting with Automunge
arXiv:1703.00377 [cs.LG] (Published 2017-03-01)
Gradient Boosting on Stochastic Data Streams