arXiv:1902.02416 [cs.LG]

Fast Hyperparameter Tuning using Bayesian Optimization with Directional Derivatives

Tinu Theckel Joy, Santu Rana, Sunil Gupta, Svetha Venkatesh

Published 2019-02-06 (Version 1)

In this paper, we develop a Bayesian optimization based hyperparameter tuning framework inspired by statistical learning theory for classifiers. We utilize two key facts from PAC learning theory: the generalization bound will be higher for a small subset of the data than for the whole dataset, and the highest accuracy on a small subset of the data can be achieved with a simpler model. We initially tune the hyperparameters on a small subset of the training data using Bayesian optimization. While tuning the hyperparameters on the whole training data, we leverage these insights from learning theory to seek more complex models. We realize this by using directional derivative signs strategically placed in the hyperparameter search space, steering the search toward a more complex model than the one obtained with the small data. We demonstrate the performance of our method on the task of tuning the hyperparameters of several machine learning algorithms.
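
The two-stage procedure described above can be illustrated with a minimal sketch. The Python code below is not the authors' algorithm: it runs plain GP-based Bayesian optimization with expected improvement over a single hypothetical hyperparameter, the log10(C) of an RBF-SVM, first on a small subsample and then on the full data. As a crude stand-in for the paper's directional derivative signs, the full-data search is simply restricted to models at least as complex as the small-data optimum (larger C, i.e. weaker regularization). All model, dataset, and budget choices here are illustrative assumptions.

```python
# Hedged sketch of the two-stage idea, NOT the authors' exact method.
import numpy as np
from scipy.stats import norm
from sklearn.datasets import load_digits
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)

def objective(log_c, X, y):
    """3-fold CV accuracy of an RBF-SVM at C = 10**log_c (to be maximized)."""
    clf = SVC(C=10.0 ** log_c, gamma="scale")
    return cross_val_score(clf, X, y, cv=3).mean()

def expected_improvement(mu, sigma, best):
    """EI acquisition for maximization under a GP posterior (mu, sigma)."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(X, y, lo, hi, n_init=3, n_iter=10):
    """Plain GP-EI Bayesian optimization of log10(C) over [lo, hi]."""
    xs = list(rng.uniform(lo, hi, n_init))
    ys = [objective(x, X, y) for x in xs]
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  alpha=1e-6, normalize_y=True)
    grid = np.linspace(lo, hi, 200).reshape(-1, 1)
    for _ in range(n_iter):
        gp.fit(np.array(xs).reshape(-1, 1), ys)
        mu, sigma = gp.predict(grid, return_std=True)
        ei = expected_improvement(mu, sigma, max(ys))
        xs.append(float(grid[np.argmax(ei), 0]))  # next candidate by EI
        ys.append(objective(xs[-1], X, y))
    return xs[int(np.argmax(ys))], max(ys)

# Stage 1: tune on a small random subset (cheap evaluations).
idx = rng.choice(len(X), size=300, replace=False)
c_small, _ = bayes_opt(X[idx], y[idx], lo=-3.0, hi=3.0)

# Stage 2: full data, restricted to models at least as complex as the
# small-data optimum -- a crude stand-in for the directional-derivative
# signs the paper places in the hyperparameter search space.
c_full, acc = bayes_opt(X, y, lo=c_small, hi=3.0)
print(f"small-data optimum log10(C)={c_small:.2f}; "
      f"full-data optimum log10(C)={c_full:.2f}, CV accuracy={acc:.3f}")
```

Truncating the search space is only an approximation of the paper's idea; the abstract suggests the derivative-sign information is incorporated into the Bayesian optimization model itself rather than by hard bounds.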

Related articles:
arXiv:1910.09347 [cs.LG] (Published 2019-10-21)
Approximate Sampling using an Accelerated Metropolis-Hastings based on Bayesian Optimization and Gaussian Processes
arXiv:2007.00939 [cs.LG] (Published 2020-07-02)
BOSH: Bayesian Optimization by Sampling Hierarchically
arXiv:2004.10599 [cs.LG] (Published 2020-04-22)
Bayesian Optimization with Output-Weighted Importance Sampling