arXiv Analytics

arXiv:2003.11947 [math.NA]

On the worst-case error of least squares algorithms for $L_2$-approximation with high probability

Mario Ullrich

Published 2020-03-25, Version 1

It was recently shown in [4] that, for $L_2$-approximation of functions from a Hilbert space, function values are almost as powerful as arbitrary linear information whenever the approximation numbers are square-summable. That is, we showed that \[ e_n \,\lesssim\, \sqrt{\frac{1}{k_n} \sum_{j\geq k_n} a_j^2} \qquad \text{ with }\quad k_n \asymp \frac{n}{\ln(n)}, \] where $e_n$ are the sampling numbers and $a_k$ are the approximation numbers. In particular, if $(a_k)\in\ell_2$, then $e_n$ and $a_n$ are of the same polynomial order. For this, we presented an explicit (weighted least squares) algorithm based on i.i.d. random points and proved that it works with positive probability. This implies the existence of a good deterministic sampling algorithm. Here, we present a modification of the proof in [4] which shows that the same algorithm works with probability at least $1-n^{-c}$ for all $c>0$.
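To make the least-squares idea concrete, the following is a minimal sketch of approximating a function from i.i.d. random samples by a weighted least-squares fit in a finite-dimensional space. It is illustrative only: the paper's algorithm uses a carefully tailored sampling density and weights derived from the approximation numbers of the Hilbert space, whereas this sketch uses uniform sampling and user-supplied weights on $[0,1]$ with a generic basis.

```python
import numpy as np

def weighted_least_squares_approx(f, basis, rng, n, weight=None):
    """Illustrative weighted least-squares L2-approximation on [0, 1].

    f      : target function, vectorized over NumPy arrays
    basis  : list of basis functions spanning the approximation space
    rng    : NumPy random generator for the i.i.d. sample points
    n      : number of sample points (function evaluations)
    weight : optional weight function w(x); uniform weights if None
             (the paper's algorithm uses a specific, tuned weight)
    """
    x = rng.uniform(0.0, 1.0, size=n)            # i.i.d. sample points
    w = np.ones(n) if weight is None else weight(x)
    A = np.column_stack([b(x) for b in basis])   # design matrix
    sw = np.sqrt(w)                              # weighted least squares via lstsq
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * f(x), rcond=None)
    return lambda t: np.column_stack([b(t) for b in basis]) @ coef
```

For instance, fitting a target that lies in the span of the basis (a cubic polynomial with a monomial basis) recovers it up to numerical precision; for targets outside the span, the fit realizes an empirical $L_2$-projection whose error the paper's analysis controls with high probability.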

Comments: 7 pages. arXiv admin note: substantial text overlap with arXiv:1905.02516
Categories: math.NA, cs.NA
Subjects: 41A25, 41A46, 60B20
Related articles:
arXiv:1905.02516 [math.NA] (Published 2019-05-07)
On $L_2$-approximation in Hilbert spaces using function values
arXiv:1811.05676 [math.NA] (Published 2018-11-14)
Worst-case error for unshifted lattice rules without randomisation
arXiv:1705.04567 [math.NA] (Published 2017-05-12)
Optimal Monte Carlo Methods for $L^2$-Approximation