arXiv Analytics

arXiv:1907.05444 [cs.LG]

On the Optimality of Trees Generated by ID3

Alon Brutzkus, Amit Daniely, Eran Malach

Published 2019-07-11Version 1

Since its inception in the 1980s, ID3 has become one of the most successful and widely used algorithms for learning decision trees. However, its theoretical properties remain poorly understood. In this work, we analyze the heuristic of growing a decision tree with ID3 for a limited number of iterations $t$, assuming that nodes are split using exact information gain and probability computations. In several settings, we provide theoretical and empirical evidence that the TopDown variant of ID3, introduced by Kearns and Mansour (1996), produces trees with optimal or near-optimal test error among all trees with $t$ internal nodes. We prove optimality in the case of learning conjunctions under product distributions and learning read-once DNFs with 2 terms under the uniform distribution. Using efficient dynamic programming algorithms, we empirically show that TopDown generates trees that are near-optimal ($\sim 1\%$ difference from optimal test error) in a large number of settings for learning read-once DNFs under product distributions.
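The ID3 heuristic analyzed in the abstract greedily splits each node on the attribute with maximal information gain. A minimal sketch of that split criterion on binary features, assuming empirical (sample-based) estimates; the paper's setting instead uses exact information gain and probability computations, and all function names here are illustrative, not from the paper:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(X, y, feature):
    """Reduction in label entropy from splitting on a binary feature."""
    n = len(y)
    left = [y[i] for i in range(n) if X[i][feature] == 0]
    right = [y[i] for i in range(n) if X[i][feature] == 1]
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(y) - weighted

def best_split(X, y):
    """ID3-style greedy choice: the feature maximizing information gain."""
    n_features = len(X[0])
    return max(range(n_features), key=lambda f: information_gain(X, y, f))
```

For example, with `X = [[0, 0], [0, 1], [1, 0], [1, 1]]` and `y = [0, 0, 1, 1]`, feature 0 determines the label exactly, so `best_split` selects it. ID3 applies this choice recursively at each node; the paper studies the tree obtained after $t$ such splits.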

Related articles:
arXiv:2010.02506 [cs.LG] (Published 2020-10-02)
Interactive Reinforcement Learning for Feature Selection with Decision Tree in the Loop
arXiv:1409.7461 [cs.LG] (Published 2014-09-26)
Autoencoder Trees
arXiv:2003.00360 [cs.LG] (Published 2020-02-29)
Decision Trees for Decision-Making under the Predict-then-Optimize Framework