arXiv:2304.08309 [cs.LG]

Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization

Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Vincent Fortuin

Published 2023-04-17 (Version 1)

The linearized-Laplace approximation (LLA) has been shown to be effective and efficient for constructing Bayesian neural networks. It is theoretically compelling since it can be seen as a Gaussian process posterior with a mean function given by the neural network's maximum-a-posteriori predictive function and a covariance function induced by the empirical neural tangent kernel. However, while its efficacy has been studied in large-scale tasks like image classification, it has not been studied in sequential decision-making problems like Bayesian optimization, where Gaussian processes -- with simple mean functions and kernels such as the radial basis function -- are the de facto surrogate models. In this work, we study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility. However, we also present pitfalls that can arise, as well as a potential problem with the LLA when the search space is unbounded.
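
For reference, the Gaussian-process view of the LLA sketched in the abstract can be written out explicitly. This is a minimal sketch of the standard construction; the symbols below (f_theta, theta_MAP, J, Sigma) are our notation, not taken from the paper:

$$ f(x) \mid \mathcal{D} \;\sim\; \mathcal{GP}\bigl( f_{\theta_{\mathrm{MAP}}}(x),\; \kappa(x, x') \bigr), \qquad \kappa(x, x') \;=\; J(x)\, \Sigma\, J(x')^{\top}, $$

where f_theta is the network, theta_MAP its maximum-a-posteriori estimate, J(x) = \nabla_\theta f_\theta(x) evaluated at theta_MAP, and Sigma the Laplace posterior covariance (e.g., an inverse generalized Gauss-Newton approximation of the Hessian). The covariance kappa is the empirical neural tangent kernel, rescaled by Sigma.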

Related articles:
arXiv:2211.01053 [cs.LG] (Published 2022-11-02)
Fantasizing with Dual GPs in Bayesian Optimization and Active Learning
arXiv:2303.01560 [cs.LG] (Published 2023-03-02, updated 2023-11-30)
Active Learning and Bayesian Optimization: a Unified Perspective to Learn with a Goal
arXiv:2310.15351 [cs.LG] (Published 2023-10-23)
Random Exploration in Bayesian Optimization: Order-Optimal Regret and Computational Efficiency