arXiv:1904.06960 [cs.LG]

On the Performance of Differential Evolution for Hyperparameter Tuning

Mischa Schmidt, Shahd Safarani, Julia Gastinger, Tobias Jacobs, Sebastien Nicolas, Anett Schülke

Published 2019-04-15 (Version 1)

Automated hyperparameter tuning aims to make the application of machine learning accessible to non-experts. In the literature, a variety of optimization approaches have been applied for this purpose. This paper investigates the performance of Differential Evolution for tuning the hyperparameters of supervised learning algorithms on classification tasks. The empirical study spans a range of machine learning algorithms and datasets with diverse characteristics, comparing Differential Evolution against Sequential Model-based Algorithm Configuration (SMAC), a reference Bayesian Optimization approach. The results indicate that Differential Evolution outperforms SMAC on most datasets when tuning a given machine learning algorithm, particularly when ties are broken in a first-to-report fashion; only under the tightest computational budgets does SMAC perform better. On small datasets, Differential Evolution outperforms SMAC by 19% (37% after tie-breaking). In a second experiment across a range of representative datasets taken from the literature, Differential Evolution scores 15% (23% after tie-breaking) more wins than SMAC.

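For illustration, below is a minimal sketch of Differential-Evolution-based hyperparameter tuning in Python. This is not the paper's experimental protocol: the SVM search space, log-scale bounds, population size, and iteration budget are assumptions chosen for brevity. SciPy's differential_evolution minimizes a continuous objective, so the sketch maps each candidate vector to hyperparameters and negates cross-validated accuracy.

    # Illustrative sketch only; not the authors' setup.
    from scipy.optimize import differential_evolution
    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)

    def objective(v):
        # DE searches a continuous box; map it to hyperparameters
        # via a log10 scale (an assumed, common parameterization).
        C, gamma = 10.0 ** v[0], 10.0 ** v[1]
        score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
        return -score  # DE minimizes, so negate the accuracy

    # Bounds in log10 space: C in [1e-3, 1e3], gamma in [1e-4, 1e1].
    result = differential_evolution(objective, bounds=[(-3, 3), (-4, 1)],
                                    maxiter=10, popsize=8, seed=0)
    print("best CV accuracy:", -result.fun)
    print("best C, gamma:", 10.0 ** result.x)

The same loop structure applies to any estimator whose hyperparameters can be encoded as a bounded real vector; integer or categorical hyperparameters would need rounding or an index-based encoding inside the objective.
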
Comments: 2019 International Joint Conference on Neural Networks (IJCNN)
Categories: cs.LG, cs.NE, stat.ML
Related articles:
arXiv:1804.07824 [cs.LG] (Published 2018-04-20)
Autotune: A Derivative-free Optimization Framework for Hyperparameter Tuning
arXiv:2009.06390 [cs.LG] (Published 2020-09-10)
IEO: Intelligent Evolutionary Optimisation for Hyperparameter Tuning
arXiv:1812.02207 [cs.LG] (Published 2018-12-05)
An empirical study on hyperparameter tuning of decision trees