arXiv:2003.13300 [cs.LG]

Weighted Random Search for CNN Hyperparameter Optimization

Razvan Andonie, Adrian-Catalin Florea

Published 2020-03-30Version 1

Nearly all model algorithms used in machine learning rely on two different sets of parameters: the training parameters and the meta-parameters (hyperparameters). While the training parameters are learned during the training phase, the values of the hyperparameters have to be specified before learning starts. For a given dataset, we would like to find the optimal combination of hyperparameter values in a reasonable amount of time. This is a challenging task because of its computational complexity. In previous work [11], we introduced the Weighted Random Search (WRS) method, a combination of Random Search (RS) and a probabilistic greedy heuristic. In the current paper, we compare the WRS method with several state-of-the-art hyperparameter optimization methods with respect to Convolutional Neural Network (CNN) hyperparameter optimization. The criterion is the classification accuracy achieved within the same number of tested combinations of hyperparameter values. According to our experiments, the WRS algorithm outperforms the other methods.
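As a rough illustration of the idea described in the abstract, and not the authors' actual implementation (see [11] and the companion paper arXiv:2004.01628 for the method itself), the following Python sketch combines RS with a greedy, probability-weighted step: each hyperparameter is resampled at random with a per-parameter probability and otherwise keeps its value from the best combination found so far. The names `wrs` and `change_prob`, and the toy objective in the usage example, are assumptions made for illustration.

```python
import random

def wrs(search_space, objective, n_trials, change_prob):
    """Sketch of a WRS-style loop (illustrative, not the authors' code).

    search_space: dict mapping hyperparameter name -> list of candidate values
    objective:    callable scoring a combination (higher is better)
    change_prob:  dict mapping hyperparameter name -> probability of resampling
    """
    # Start from a fully random combination, as in plain Random Search.
    best = {name: random.choice(values) for name, values in search_space.items()}
    best_score = objective(best)
    for _ in range(n_trials - 1):
        candidate = {}
        for name, values in search_space.items():
            if random.random() < change_prob[name]:
                candidate[name] = random.choice(values)  # explore: draw a new random value
            else:
                candidate[name] = best[name]             # exploit: keep the best-known value
        score = objective(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Toy usage with a made-up objective (illustrative only):
space = {"lr": [1e-4, 1e-3, 1e-2], "batch": [32, 64, 128], "dropout": [0.0, 0.25, 0.5]}
probs = {"lr": 0.9, "batch": 0.5, "dropout": 0.3}  # per-parameter change probabilities
score = lambda h: -abs(h["lr"] - 1e-3) - abs(h["dropout"] - 0.25)
print(wrs(space, score, n_trials=50, change_prob=probs))
```

In this sketch, the per-parameter change probabilities play the role of the "weights" in Weighted Random Search: parameters judged more important are resampled more often, while the rest stay pinned to their best-known values, so each trial spends its budget where it is most likely to pay off.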

Comments: 11 pages, 2 figures, journal article
Journal: International Journal of Computers Communications & Control, Vol. 15, No. 2, 2020
Categories: cs.LG, stat.ML
Related articles:
arXiv:2004.01628 [cs.LG] (Published 2020-04-03)
Weighted Random Search for Hyperparameter Optimization
arXiv:2108.03508 [cs.LG] (Published 2021-08-07)
The Effect of Training Parameters and Mechanisms on Decentralized Federated Learning based on MNIST Dataset
arXiv:1608.00218 [cs.LG] (Published 2016-07-31)
Hyperparameter Transfer Learning through Surrogate Alignment for Efficient Deep Neural Network Training