arXiv:2111.11223 [stat.ML]

Transfer Learning with Gaussian Processes for Bayesian Optimization

Petru Tighineanu, Kathrin Skubch, Paul Baireuther, Attila Reiss, Felix Berkenkamp, Julia Vinogradska

Published 2021-11-22, updated 2022-03-15 (Version 2)

Bayesian optimization is a powerful paradigm for optimizing black-box functions based on scarce and noisy data. Its data efficiency can be further improved by transfer learning from related tasks. While recent transfer models meta-learn a prior based on large amounts of data, in the low-data regime methods that exploit the closed-form posterior of Gaussian processes (GPs) have an advantage. In this setting, several analytically tractable transfer-model posteriors have been proposed, but the relative advantages of these methods are not well understood. In this paper, we provide a unified view of hierarchical GP models for transfer learning, which allows us to analyze the relationships between methods. As part of the analysis, we develop a novel closed-form boosted GP transfer model that sits between existing approaches in terms of complexity. We evaluate the performance of the different approaches in large-scale experiments and highlight the strengths and weaknesses of the different transfer-learning methods.
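The abstract does not spell out the boosted transfer model itself, but the general idea behind such hierarchical GP schemes can be sketched in a few lines: fit a GP to the abundant source-task data, then let a second GP model the target task's residuals around the source posterior mean, so that the closed-form posteriors compose. The Python sketch below is a hypothetical illustration of this residual scheme only, not the authors' model; the kernel, hyperparameters, helper names (rbf_kernel, gp_posterior), and toy tasks are all assumptions made for the example.

import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: k(x, x') = s^2 * exp(-||x - x'||^2 / (2 l^2)).
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    return variance * np.exp(-0.5 * np.maximum(d2, 0.0) / lengthscale**2)

def gp_posterior(X, y, X_test, noise=1e-2):
    # Closed-form GP posterior mean and variance at X_test given data (X, y).
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = np.diag(rbf_kernel(X_test, X_test)) - np.sum(v**2, axis=0)
    return mean, var

# Hypothetical related tasks: plenty of source data, very little target data.
rng = np.random.default_rng(0)
X_src = rng.uniform(-2, 2, (40, 1))
y_src = np.sin(3 * X_src).ravel()                    # source task
X_tgt = rng.uniform(-2, 2, (5, 1))
y_tgt = (np.sin(3 * X_tgt) + 0.3 * X_tgt).ravel()    # shifted target task
X_test = np.linspace(-2, 2, 100)[:, None]

# Source GP provides the prior mean; target GP models only the residuals.
mu_src_tgt, _ = gp_posterior(X_src, y_src, X_tgt)
mu_src_test, _ = gp_posterior(X_src, y_src, X_test)
mu_res, var_res = gp_posterior(X_tgt, y_tgt - mu_src_tgt, X_test)
mu_transfer = mu_src_test + mu_res                   # transferred posterior mean

With only five target observations, the source GP's posterior mean carries most of the prediction, which is exactly the low-data regime in which exploiting closed-form GP posteriors pays off.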

Related articles:
arXiv:1910.05484 [stat.ML] (Published 2019-10-12)
Bayesian Optimization using Pseudo-Points
arXiv:1402.4306 [stat.ML] (Published 2014-02-18, updated 2014-02-19)
Student-t Processes as Alternatives to Gaussian Processes
arXiv:1910.09259 [stat.ML] (Published 2019-10-21)
Bayesian Optimization Allowing for Common Random Numbers