arXiv:1604.01348 [stat.ML]

Bayesian Optimization with Exponential Convergence

Kenji Kawaguchi, Leslie Pack Kaelbling, Tomás Lozano-Pérez

Published 2016-04-05 (Version 1)

This paper presents a Bayesian optimization method with exponential convergence that requires neither auxiliary optimization nor delta-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem solved at every iteration, which can be time-consuming and hard to implement in practice. Moreover, the existing Bayesian optimization method with exponential convergence requires access to delta-cover sampling, which is considered impractical. Our approach eliminates both requirements while retaining an exponential convergence rate.
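The "auxiliary optimization" the abstract refers to is the inner global maximization of an acquisition function that conventional GP-based methods run at every iteration. Below is a minimal sketch (Python/NumPy) of such a conventional GP-UCB loop, not the paper's method; the RBF kernel, the grid-search acquisition maximizer, and the toy objective `f` are illustrative assumptions.

```python
# Sketch of conventional GP-based Bayesian optimization, showing the
# auxiliary optimization step (acquisition maximization) that this paper
# eliminates. Kernel, objective, and grid search are illustrative choices.
import numpy as np

def rbf_kernel(a, b, length_scale=0.2):
    # Squared-exponential kernel k(a, b) = exp(-|a - b|^2 / (2 l^2)).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-5):
    # Standard GP regression equations for posterior mean and variance.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_query, x_train)
    K_inv = np.linalg.inv(K)
    mean = K_s @ K_inv @ y_train
    var = 1.0 - np.sum((K_s @ K_inv) * K_s, axis=1)
    return mean, np.maximum(var, 1e-12)

def f(x):
    # Hypothetical multimodal objective to maximize on [0, 1].
    return np.sin(13 * x) * np.sin(27 * x)

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=3)
y_train = f(x_train)
grid = np.linspace(0.0, 1.0, 2001)  # stand-in for a global search over the domain

for _ in range(15):
    mean, var = gp_posterior(x_train, y_train, grid)
    ucb = mean + 2.0 * np.sqrt(var)   # UCB acquisition function
    x_next = grid[np.argmax(ucb)]     # the auxiliary global optimization step
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, f(x_next))

print("best x:", x_train[np.argmax(y_train)], "best f:", y_train.max())
```

The grid search here stands in for the non-convex inner problem; in higher dimensions that inner maximization is itself a hard global optimization, which is the requirement the paper's method removes.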

Comments: In NIPS 2015 (Advances in Neural Information Processing Systems 2015)
Categories: stat.ML, cs.LG
Related articles:
arXiv:2202.10923 [stat.ML] (Published 2022-02-21)
MSTGD: A Memory Stochastic sTratified Gradient Descent Method with an Exponential Convergence Rate
arXiv:1806.05438 [stat.ML] (Published 2018-06-14)
Stochastic Gradient Descent with Exponential Convergence Rates of Expected Classification Errors
arXiv:1911.05350 [stat.ML] (Published 2019-11-13)
Exponential Convergence Rates of Classification Errors on Learning with SGD and Random Features