arXiv Analytics

arXiv:2505.11323 [stat.ML]

Convergence Rates of Constrained Expected Improvement

Haowei Wang, Jingyi Wang, Zhongxiang Dai, Nai-Yuan Chiang, Szu Hui Ng, Cosmin G. Petra

Published 2025-05-16 (Version 1)

Constrained Bayesian optimization (CBO) methods have seen significant success in black-box optimization with constraints, and one of the most commonly used CBO methods is the constrained expected improvement (CEI) algorithm. CEI is a natural extension of the expected improvement (EI) algorithm to settings where constraints are incorporated. However, the theoretical convergence rate of CEI has not been established. In this work, we study the convergence rate of CEI by analyzing its simple regret upper bound. First, we show that when the objective function $f$ and constraint function $c$ are assumed to each lie in a reproducing kernel Hilbert space (RKHS), CEI achieves convergence rates of $\mathcal{O} \left(t^{-\frac{1}{2}}\log^{\frac{d+1}{2}}(t) \right) \ \text{and }\ \mathcal{O}\left(t^{-\frac{\nu}{2\nu+d}} \log^{\frac{\nu}{2\nu+d}}(t)\right)$ for the commonly used squared exponential and Mat\'{e}rn kernels, respectively. Second, we show that when $f$ and $c$ are assumed to be sampled from Gaussian processes (GPs), CEI achieves the same convergence rates with high probability. Numerical experiments are performed to validate the theoretical analysis.
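As background for readers unfamiliar with the acquisition function being analyzed: the standard CEI criterion (not code from this paper) multiplies the usual expected improvement of the objective by the posterior probability that the constraint is satisfied. A minimal sketch, assuming a minimization problem with a single constraint $c(x) \le 0$ and Gaussian posterior marginals with mean and standard deviation supplied by the user (the function name and argument layout here are illustrative, not from the paper):

```python
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sigma_f, mu_c, sigma_c, f_best):
    """Constrained expected improvement for minimizing f subject to c(x) <= 0.

    mu_f, sigma_f : GP posterior mean / std of the objective at candidate points.
    mu_c, sigma_c : GP posterior mean / std of the constraint at the same points.
    f_best        : best feasible objective value observed so far.
    """
    sigma_f = np.maximum(sigma_f, 1e-12)  # guard against zero predictive variance
    sigma_c = np.maximum(sigma_c, 1e-12)

    # Standard EI for minimization: E[max(f_best - f(x), 0)] under the GP posterior.
    z = (f_best - mu_f) / sigma_f
    ei = (f_best - mu_f) * norm.cdf(z) + sigma_f * norm.pdf(z)

    # Posterior probability that the constraint c(x) <= 0 holds.
    prob_feasible = norm.cdf((0.0 - mu_c) / sigma_c)

    # CEI weights improvement by feasibility probability.
    return ei * prob_feasible
```

Points with a near-certainly violated constraint (large positive `mu_c`, small `sigma_c`) receive an acquisition value close to zero, which is how CEI steers sampling toward the feasible region.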
