arXiv:2004.12654 [math.NA]

Integration in reproducing kernel Hilbert spaces of Gaussian kernels

Toni Karvonen, Chris J. Oates, Mark Girolami

Published 2020-04-27 (Version 1)

The Gaussian kernel plays a central role in machine learning, uncertainty quantification and scattered data approximation, but has received relatively little attention from a numerical analysis standpoint. The basic problem of finding an algorithm for efficient numerical integration of functions reproduced by Gaussian kernels has not been fully solved. In this article we construct two classes of algorithms that use $N$ evaluations to integrate $d$-variate functions reproduced by Gaussian kernels and prove the exponential or super-algebraic decay of their worst-case errors. In contrast to earlier work, no constraints are placed on the length-scale parameter of the Gaussian kernel. The first class of algorithms is obtained via an appropriate scaling of the classical Gauss-Hermite rules. For these algorithms we derive lower and upper bounds on the worst-case error of the forms $\exp(-c_1 N^{1/d}) N^{1/(4d)}$ and $\exp(-c_2 N^{1/d}) N^{-1/(4d)}$, respectively, for positive constants $c_1 > c_2$. The second class of algorithms we construct is more flexible and uses worst-case optimal weights for points that may be taken as a nested sequence. For these algorithms we derive only upper bounds, which are of the form $\exp(-c_3 N^{1/(2d)})$ for a positive constant $c_3$.
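
The two constructions in the abstract can be sketched concretely. The Python snippet below is a minimal illustration, not the authors' code: it assumes integration against the standard Gaussian measure $N(0, 1)$ in one dimension, uses the classical Gauss-Hermite rule normalized for that measure (the paper's own scaling of the points depends on the length-scale and is not reproduced here), and computes worst-case optimal weights for fixed points by solving the linear system $K w = z$, where $K$ is the kernel Gram matrix and $z$ is the closed-form kernel mean embedding. All function names are illustrative.

# Minimal sketch; assumptions: one dimension, standard Gaussian integration
# measure, Gaussian kernel k(x, y) = exp(-(x - y)^2 / (2 ell^2)).
import numpy as np

def gauss_hermite_rule(n):
    # Classical Gauss-Hermite nodes/weights for the weight exp(-x^2),
    # rescaled so that sum(w * f(x)) approximates E[f(Y)] for Y ~ N(0, 1).
    x, w = np.polynomial.hermite.hermgauss(n)
    return np.sqrt(2.0) * x, w / np.sqrt(np.pi)

def kernel_embedding(x, ell):
    # Closed form of z_i = E_{Y ~ N(0,1)}[k(x_i, Y)] for the Gaussian kernel.
    return ell / np.sqrt(ell**2 + 1.0) * np.exp(-x**2 / (2.0 * (ell**2 + 1.0)))

def optimal_weights(x, ell):
    # Worst-case optimal weights for fixed points solve K w = z.
    K = np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * ell**2))
    return np.linalg.solve(K, kernel_embedding(x, ell))

# Usage: integrate f(x) = cos(x) against N(0, 1); the exact value is exp(-1/2).
x, w_gh = gauss_hermite_rule(10)
w_opt = optimal_weights(x, ell=1.0)
print(w_gh @ np.cos(x), w_opt @ np.cos(x), np.exp(-0.5))

Note that the Gram matrix $K$ is often severely ill-conditioned for Gaussian kernels, so in practice a regularized or Cholesky-based solve may be preferable to the plain solve shown above.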
