arXiv:2006.14284 [cs.LG]

Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation

Rasool Fakoor, Jonas Mueller, Nick Erickson, Pratik Chaudhari, Alexander J. Smola

Published 2020-06-25 (Version 1)

Automated machine learning (AutoML) can produce complex model ensembles by stacking, bagging, and boosting many individual models like trees, deep networks, and nearest neighbor estimators. While highly accurate, the resulting predictors are large, slow, and opaque compared with their constituents. To improve the deployment of AutoML on tabular data, we propose FAST-DAD to distill arbitrarily complex ensemble predictors into individual models like boosted trees, random forests, and deep networks. At the heart of our approach is a data augmentation strategy based on Gibbs sampling from a self-attention pseudolikelihood estimator. Across 30 datasets spanning regression and binary/multiclass classification tasks, FAST-DAD distillation produces significantly better individual models than one obtains through standard training on the original data. Our individual distilled models are over 10x faster and more accurate than ensemble predictors produced by AutoML tools like H2O/AutoSklearn.
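The recipe described in the abstract (model each feature conditionally on the others, Gibbs-sample synthetic rows, label them with the teacher ensemble, and train a simple student on the teacher's predictions) can be sketched in a few lines. The sketch below is an illustration only, not the published FAST-DAD implementation: it swaps the paper's self-attention pseudolikelihood estimator for simple per-feature gradient-boosted regressors, and the toy data, noise scale, sweep count, and model choices (teacher, student, conditionals) are all assumptions made for the example.

# Minimal sketch of distillation with Gibbs-style data augmentation.
# NOT the paper's exact method: FAST-DAD fits a self-attention
# pseudolikelihood model, whereas this illustration approximates each
# conditional p(x_j | x_-j) with a per-feature gradient-boosted regressor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy tabular regression data standing in for a real dataset.
X = rng.normal(size=(500, 4))
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2]) + 0.1 * rng.normal(size=500)

# "Teacher": a complex ensemble whose predictions we want to distill.
teacher = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Fit one conditional model per feature: x_j given the remaining features.
conditionals = []
for j in range(X.shape[1]):
    rest = np.delete(X, j, axis=1)
    conditionals.append(GradientBoostingRegressor(random_state=0).fit(rest, X[:, j]))

# Gibbs-style augmentation: start from real rows, then resample one
# feature at a time from its approximate conditional, with added noise
# so the synthetic rows do not collapse onto the conditional means.
X_aug = X[rng.integers(0, len(X), size=2000)].copy()
for _ in range(3):  # a few Gibbs sweeps over all features
    for j, m in enumerate(conditionals):
        rest = np.delete(X_aug, j, axis=1)
        X_aug[:, j] = m.predict(rest) + 0.3 * rng.normal(size=len(X_aug))

# Label real and augmented points with the teacher, then fit a single
# simple student model on the teacher's predictions (distillation).
X_all = np.vstack([X, X_aug])
y_all = np.concatenate([teacher.predict(X), teacher.predict(X_aug)])
student = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_all, y_all)

The design point the sketch illustrates is that the student never sees the original labels: it only mimics the teacher, and the Gibbs-sampled rows give it many more input points at which to match the teacher's function than the original training set alone would.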

Related articles:
arXiv:1904.12857 [cs.LG] (Published 2019-04-29)
AutoCross: Automatic Feature Crossing for Tabular Data in Real-World Applications
Yuanfei Luo et al.
arXiv:2207.08815 [cs.LG] (Published 2022-07-18)
Why do tree-based models still outperform deep learning on tabular data?
arXiv:2406.00775 [cs.LG] (Published 2024-06-02)
Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data