arXiv:2107.14465 [cs.LG]

Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization

Quoc Phong Nguyen, Zhaoxuan Wu, Bryan Kian Hsiang Low, Patrick Jaillet

Published 2021-07-30 (Version 1)

Information-based Bayesian optimization (BO) algorithms have achieved state-of-the-art performance in optimizing a black-box objective function. However, they usually require several approximations or simplifying assumptions (without a clear understanding of their effects on BO performance), and/or their generalization to batch BO is computationally unwieldy, especially with an increasing batch size. To alleviate these issues, this paper presents a novel trusted-maximizers entropy search (TES) acquisition function: it measures how much an input query contributes to the information gain on the maximizer over a finite set of trusted maximizers, i.e., inputs optimizing functions that are sampled from the Gaussian process posterior belief of the objective function. Evaluating TES requires only either a stochastic approximation with sampling or a deterministic approximation with expectation propagation, both of which are investigated and empirically evaluated using synthetic benchmark objective functions and real-world optimization problems, e.g., hyperparameter tuning of a convolutional neural network and synthesizing 'physically realizable' faces to fool a black-box face recognition system. Though TES can naturally be generalized to a batch variant with either approximation, the latter can be scaled to a much larger batch size in our experiments.
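To make the stochastic (sampling-based) approximation concrete, below is a minimal, hypothetical Python sketch using scikit-learn's GaussianProcessRegressor; it is not the authors' implementation. Trusted maximizers are taken as the argmaxes of posterior function samples over a candidate grid, the distribution p(z*) over which trusted maximizer is best is estimated empirically from joint posterior samples, and TES(x) is approximated as the expected reduction in the entropy of p(z*) after hallucinating an observation at x. All function names and parameter choices here are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def entropy(p):
    """Shannon entropy of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def trusted_maximizers(gp, X_cand, n_max=5, seed=0):
    # Each posterior function sample's argmax over the candidate grid
    # yields one trusted maximizer (a Thompson-sampling-style construction).
    F = gp.sample_y(X_cand, n_samples=n_max, random_state=seed)
    return X_cand[F.argmax(axis=0)]

def maximizer_distribution(gp, Z, n_samples=512, seed=0):
    # Empirical p(z*): frequency with which each trusted maximizer attains
    # the largest value under joint posterior samples of f at Z.
    F = gp.sample_y(Z, n_samples=n_samples, random_state=seed)
    counts = np.bincount(F.argmax(axis=0), minlength=len(Z))
    return counts / n_samples

def tes(x, gp, X_obs, y_obs, Z, n_y=16, seed=0):
    # Stochastic approximation of TES(x) = H[p(z*)] - E_y[H[p(z* | x, y)]],
    # averaging over hallucinated observations y ~ p(y | x, data).
    rng = np.random.default_rng(seed)
    h_prior = entropy(maximizer_distribution(gp, Z, seed=seed))
    mu, sd = gp.predict(x.reshape(1, -1), return_std=True)
    h_post = 0.0
    for i in range(n_y):
        y = rng.normal(mu[0], sd[0])  # hallucinated observation at x
        gp_aug = GaussianProcessRegressor(kernel=gp.kernel_, optimizer=None)
        gp_aug.fit(np.vstack([X_obs, x]), np.append(y_obs, y))
        h_post += entropy(maximizer_distribution(gp_aug, Z, seed=seed + i))
    return h_prior - h_post / n_y

# Toy usage on a 1-D objective: pick the next query by maximizing TES.
rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 1, size=(6, 1))
y_obs = np.sin(6 * X_obs).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X_obs, y_obs)
X_cand = np.linspace(0, 1, 100).reshape(-1, 1)
Z = trusted_maximizers(gp, X_cand, n_max=5)
scores = [tes(x, gp, X_obs, y_obs, Z) for x in X_cand]
x_next = X_cand[int(np.argmax(scores))]
```

Because the set of trusted maximizers is finite, p(z*) is a small discrete distribution, so its entropy is cheap to evaluate; the cost of this sketch is dominated by refitting the GP on each hallucinated observation, which the paper's deterministic expectation-propagation approximation is designed to avoid.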

Related articles:
arXiv:cs/0612095 [cs.LG] (Published 2006-12-19, updated 2008-09-15)
Approximation of the Two-Part MDL Code
arXiv:2102.08993 [cs.LG] (Published 2021-02-17)
Using Distance Correlation for Efficient Bayesian Optimization
arXiv:1506.02080 [cs.LG] (Published 2015-06-05)
Local Nonstationarity for Efficient Bayesian Optimization