arXiv Analytics


arXiv:2501.19073 [cs.LG]

Pareto-frontier Entropy Search with Variational Lower Bound Maximization

Masanori Ishikura, Masayuki Karasuyama

Published 2025-01-31 (Version 1)

This study considers multi-objective Bayesian optimization (MOBO) through the information gain of the Pareto-frontier. To calculate the information gain, a predictive distribution conditioned on the Pareto-frontier plays a key role; it is defined as a distribution truncated by the Pareto-frontier. However, the entire Pareto-frontier is usually impossible to obtain in a continuous domain, and therefore the complete truncation cannot be known. We approximate the truncated distribution by a mixture of the two approximate truncations obtainable from a subset of the Pareto-frontier, which we call over- and under-truncation. Since the optimal balance of the mixture is unknown beforehand, we propose optimizing the balancing coefficient through the variational lower bound maximization framework, by which the approximation error of the information gain can be minimized. Our empirical evaluation demonstrates the effectiveness of the proposed method, particularly when the number of objective functions is large.
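The mixture idea can be illustrated with a toy 1D sketch (this is an illustrative assumption, not the paper's algorithm): a "true" predictive truncated at an unknown threshold is bracketed by an over-truncation (truncates too much) and an under-truncation (truncates too little), and the balancing coefficient of their mixture is chosen to maximize an average log-likelihood, in the spirit of a variational lower bound. All thresholds and names below are hypothetical.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Hypothetical setup: the true predictive is N(0, 1) truncated to y <= t_true,
# but only the bracketing thresholds t_over < t_true < t_under are available.
t_true, t_over, t_under = 0.5, 0.0, 1.0

def trunc_norm_pdf(y, t):
    """Density of N(0, 1) truncated to y <= t."""
    Z = 0.5 * (1.0 + erf(t / sqrt(2.0)))          # normalizing mass Phi(t)
    pdf = np.exp(-0.5 * y**2) / np.sqrt(2.0 * np.pi)
    return np.where(y <= t, pdf / Z, 0.0)

# Samples from the true truncated predictive via rejection sampling.
ys = rng.standard_normal(20000)
ys = ys[ys <= t_true][:5000]

def mix_pdf(y, w):
    """Mixture of the over- and under-truncated approximations."""
    return w * trunc_norm_pdf(y, t_over) + (1.0 - w) * trunc_norm_pdf(y, t_under)

# Choose the balancing coefficient w by maximizing the average log-likelihood
# over a grid (a lower-bound-style surrogate objective for this toy example).
ws = np.linspace(0.0, 1.0, 101)
scores = [np.mean(np.log(mix_pdf(ys, w) + 1e-300)) for w in ws]
w_star = ws[int(np.argmax(scores))]
```

Neither extreme is best here: the over-truncation assigns zero density to true samples in (t_over, t_true], while the under-truncation spreads mass beyond t_true, so the optimal `w_star` lies strictly between 0 and 1.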

Related articles: Most relevant | Search more
arXiv:1906.03574 [cs.LG] (Published 2019-06-09)
Transfer Learning by Modeling a Distribution over Policies
arXiv:1603.02501 [cs.LG] (Published 2016-03-08)
Mixture Proportion Estimation via Kernel Embedding of Distributions
arXiv:2010.15100 [cs.LG] (Published 2020-10-28)
Evaluating Model Robustness to Dataset Shift