arXiv:2305.17170 [stat.ML]

Error Bounds for Learning with Vector-Valued Random Features

Samuel Lanthaler, Nicholas H. Nelsen

Published 2023-05-26 (Version 1)

This paper provides a comprehensive error analysis of learning with vector-valued random features (RF). The theory is developed for RF ridge regression in a fully general infinite-dimensional input-output setting, but nonetheless applies to and improves existing finite-dimensional analyses. In contrast to comparable work in the literature, the approach proposed here relies on a direct analysis of the underlying risk functional and completely avoids the explicit RF ridge regression solution formula in terms of random matrices. This removes the need for concentration results in random matrix theory or their generalizations to random operators. The main results established in this paper include strong consistency of vector-valued RF estimators under model misspecification and minimax optimal convergence rates in the well-specified setting. The parameter complexity (number of random features) and sample complexity (number of labeled data) required to achieve such rates are comparable with Monte Carlo intuition and free from logarithmic factors.
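To make the setting concrete, here is a minimal sketch of vector-valued random feature ridge regression in a finite-dimensional toy instance. The specific feature map (random Fourier features for a Gaussian kernel), the toy target map `G`, and all dimensions and hyperparameter values are illustrative assumptions, not taken from the paper; the paper's analysis covers a fully general infinite-dimensional input-output setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: vector-valued target Y = G(X) + noise, where the
# hypothetical ground truth G maps R^d -> R^p (a finite-dimensional
# stand-in for the operator-learning setting).
n, d, p = 200, 3, 2                      # samples, input dim, output dim
X = rng.normal(size=(n, d))

def G(x):
    # Illustrative smooth vector-valued map, chosen arbitrarily.
    return np.stack([np.sin(x @ np.ones(d)), np.cos(x[:, 0])], axis=1)

Y = G(X) + 0.05 * rng.normal(size=(n, p))

# Random Fourier features phi(x) = sqrt(2/m) * cos(W x + b), a standard
# RF map whose kernel is Gaussian; m is the parameter complexity
# (number of random features) discussed in the abstract.
m = 100
W = rng.normal(size=(m, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=m)

def features(x):
    return np.sqrt(2.0 / m) * np.cos(x @ W.T + b)

# RF ridge regression: minimize (1/n)||Phi c - Y||^2 + lam ||c||^2,
# solved via the normal equations (Phi^T Phi + n*lam*I) c = Phi^T Y.
lam = 1e-3
Phi = features(X)                        # shape (n, m)
C = np.linalg.solve(Phi.T @ Phi + n * lam * np.eye(m), Phi.T @ Y)

# Predict on held-out inputs and measure excess risk empirically.
X_test = rng.normal(size=(50, d))
Y_pred = features(X_test) @ C            # shape (50, p)
mse = np.mean((Y_pred - G(X_test)) ** 2)
```

The paper's bounds quantify how this estimator's excess risk decays as both `n` (sample complexity) and `m` (parameter complexity) grow, with requirements matching Monte Carlo intuition and free of log factors.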

Related articles:
arXiv:1504.00052 [stat.ML] (Published 2015-03-31)
Improved Error Bounds Based on Worst Likely Assignments
arXiv:2408.09004 [stat.ML] (Published 2024-08-16)
Error Bounds for Learning Fourier Linear Operators
arXiv:2407.14175 [stat.ML] (Published 2024-07-19)
On Policy Evaluation Algorithms in Distributional Reinforcement Learning