arXiv Analytics

arXiv:1905.09680 [cs.LG]

DEEP-BO for Hyperparameter Optimization of Deep Networks

Hyunghun Cho, Yongjin Kim, Eunjung Lee, Daeyoung Choi, Yongjae Lee, Wonjong Rhee

Published 2019-05-23 (Version 1)

The performance of deep neural networks (DNNs) is highly sensitive to the choice of hyperparameters. To make matters worse, the shape of the learning curve can change significantly when a technique such as batch normalization is used. As a result, hyperparameter optimization of deep networks can be much more challenging than for traditional machine learning models. In this work, we start from well-known Bayesian optimization solutions and provide enhancement strategies specifically designed for hyperparameter optimization of deep networks. The resulting algorithm is named DEEP-BO (Diversified, Early-termination-Enabled, and Parallel Bayesian Optimization). When evaluated over six DNN benchmarks, DEEP-BO outperforms or performs comparably to well-known solutions including GP-Hedge, Hyperband, BOHB, the Median Stopping Rule, and Learning Curve Extrapolation. The code is publicly available at https://github.com/snu-adsl/DEEP-BO.
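To make the three ideas named in the abstract concrete, below is a minimal, self-contained sketch of a hyperparameter search loop that alternates between two acquisition behaviours (a stand-in for diversification), and cuts off unpromising trials with a Median-Stopping-Rule-style check (early termination). Every function and name here (train_curve, propose, median_stopping) is an illustrative assumption, not the authors' API; the actual DEEP-BO implementation is in the linked repository.

```python
# Toy sketch of diversified BO with early termination; NOT the DEEP-BO code.
import numpy as np

rng = np.random.default_rng(0)

def train_curve(lr, width, epochs=20):
    """Toy stand-in for training a DNN: per-epoch validation accuracy."""
    best = 1.0 - 0.05 * (np.log10(lr) + 2.5) ** 2 - abs(width - 128) / 1024
    return [best * (1 - np.exp(-(e + 1) / 5)) + rng.normal(0, 0.01)
            for e in range(epochs)]

def median_stopping(completed, curve, epoch):
    """Stop a trial whose accuracy falls below the median of completed
    trials at the same epoch (Median Stopping Rule style)."""
    if len(completed) < 3:
        return False
    return curve[epoch] < np.median([c[epoch] for c in completed])

def propose(observations, strategy):
    """Diversification: cycle acquisition behaviours instead of committing
    to a single one (loosely in the spirit of GP-Hedge)."""
    if strategy == "explore" or not observations:
        return {"lr": 10 ** rng.uniform(-4, -1),
                "width": int(rng.integers(16, 513))}
    # "exploit": perturb the best configuration seen so far
    best = max(observations, key=lambda o: o["score"])["config"]
    return {"lr": best["lr"] * 10 ** rng.normal(0, 0.2),
            "width": int(np.clip(best["width"] + rng.integers(-32, 33), 16, 512))}

observations, completed_curves = [], []
strategies = ["explore", "exploit"]            # diversified acquisition pool
for trial in range(20):
    config = propose(observations, strategies[trial % len(strategies)])
    curve = []
    for epoch, acc in enumerate(train_curve(**config)):
        curve.append(acc)
        if median_stopping(completed_curves, curve, epoch):
            break                              # early termination of a poor trial
    else:
        completed_curves.append(curve)
    observations.append({"config": config, "score": max(curve)})

best = max(observations, key=lambda o: o["score"])
print("best config:", best["config"], "score: %.3f" % best["score"])
```

In DEEP-BO itself, diversification and parallelism apply across multiple surrogate/acquisition combinations running on several workers; this sketch only cycles strategies serially to keep the example short.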

Related articles:
arXiv:1907.08475 [cs.LG] (Published 2019-07-19)
Representational Capacity of Deep Neural Networks -- A Computing Study
arXiv:1902.02366 [cs.LG] (Published 2019-02-06)
Negative eigenvalues of the Hessian in deep neural networks
arXiv:1904.08050 [cs.LG] (Published 2019-04-17)
Sparseout: Controlling Sparsity in Deep Networks