arXiv:1510.06423 [stat.ML]

Optimization as Estimation with Gaussian Processes in Bandit Settings

Zi Wang, Bolei Zhou, Stefanie Jegelka

Published 2015-10-21 (Version 1)

Recently, there has been rising interest in Bayesian optimization -- the optimization of an unknown function with assumptions usually expressed by a Gaussian Process (GP) prior. We study an optimization strategy that directly uses a maximum a posteriori (MAP) estimate of the argmax of the function. This strategy offers both practical and theoretical advantages: no tradeoff parameter needs to be selected, and, moreover, we establish close connections to the popular GP-UCB and GP-PI strategies. The MAP criterion can be understood as automatically and adaptively trading off exploration and exploitation in GP-UCB and GP-PI. We illustrate the effects of this adaptive tuning via bounds on the regret as well as an extensive empirical evaluation on robotics and vision tasks, demonstrating the robustness of this strategy for a range of performance criteria.
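
The connection between the MAP criterion and GP-UCB/GP-PI can be made concrete with a small numerical sketch. The snippet below (a minimal illustration, not the authors' implementation) fits a GP posterior to a toy 1-D objective and compares three acquisition rules: GP-UCB with a hand-picked tradeoff parameter beta, GP-PI with the incumbent best value as its target, and an EST-style score (m_hat - mu(x)) / sigma(x) that replaces the tradeoff parameter with an estimate m_hat of the function's maximum. The RBF kernel, the toy objective, and the crude upper-quantile stand-in used for m_hat are assumptions made for illustration only; the paper instead works with a MAP estimate derived from the GP posterior.

```python
# Minimal sketch (illustrative assumptions, not the paper's implementation):
# compare GP-UCB, GP-PI, and an EST-style acquisition on a toy 1-D problem.
import numpy as np
from scipy.stats import norm

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel on 1-D inputs (illustrative choice).
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-4):
    # Standard GP regression posterior mean and standard deviation.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    Kss = rbf(x_query, x_query)
    mu = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mu, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

# Toy objective and a few observations (assumed for the example).
f = lambda x: np.sin(3 * x) + 0.5 * x
x_train = np.array([0.1, 0.4, 0.9])
y_train = f(x_train)
x_query = np.linspace(0.0, 1.0, 200)
mu, sigma = gp_posterior(x_train, y_train, x_query)

beta = 2.0                      # GP-UCB tradeoff parameter, hand-picked
tau = y_train.max()             # GP-PI target: current best observation
m_hat = np.max(mu + 3 * sigma)  # crude stand-in for the estimated maximum

ucb = mu + np.sqrt(beta) * sigma        # GP-UCB acquisition (maximize)
pi = norm.cdf((mu - tau) / sigma)       # GP-PI acquisition (maximize)
est = (m_hat - mu) / sigma              # EST-style score (minimize)

print("next query, GP-UCB:", x_query[np.argmax(ucb)])
print("next query, GP-PI :", x_query[np.argmax(pi)])
print("next query, EST   :", x_query[np.argmin(est)])
```

In this view, minimizing (m_hat - mu) / sigma is equivalent to running GP-PI with target m_hat, or GP-UCB with a beta chosen per iteration so that the upper confidence bound touches m_hat, which is the sense in which the MAP criterion adaptively sets the exploration-exploitation tradeoff.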
