arXiv Analytics

arXiv:1707.00577 [stat.ML]

Generalization Properties of Doubly Online Learning Algorithms

Junhong Lin, Lorenzo Rosasco

Published 2017-07-03 (Version 1)

Doubly online learning algorithms are scalable kernel methods that perform very well in practice. However, their generalization properties are not well understood, and their analysis is challenging since the corresponding learning sequence may not lie in the hypothesis space induced by the kernel. In this paper, we provide an in-depth theoretical analysis of different variants of doubly online learning algorithms in the setting of nonparametric regression in a reproducing kernel Hilbert space with the square loss. In particular, we derive convergence results on the generalization error for the studied algorithms, both with and without an explicit penalty term. To the best of our knowledge, the derived results for the unregularized variants are the first of this kind, while the results for the regularized variants improve those in the literature. The novelties in our proof are a sample-error bound that requires controlling the trace norm of a cumulative operator, and a refined analysis for bounding the initial error.
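To make the setting concrete, the following is a minimal sketch of one algorithm in this family: single-pass stochastic gradient descent on the square loss over random Fourier features approximating a Gaussian kernel, with an optional ridge penalty (set `lam > 0` for the regularized variant). All names, the synthetic data, the bandwidth, and the decaying step size are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data (illustrative, not from the paper)
n, d, D = 500, 1, 200  # samples, input dim, number of random features
X = rng.uniform(-1, 1, size=(n, d))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(n)

# Random Fourier features approximating a Gaussian kernel of bandwidth sigma
sigma = 0.5
W = rng.standard_normal((d, D)) / sigma
b = rng.uniform(0, 2 * np.pi, D)

def features(x):
    """Map inputs into the random-feature space."""
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

# Single-pass (online) SGD on the square loss; lam > 0 adds a ridge penalty
theta = np.zeros(D)
lam = 0.0
for t in range(n):
    phi = features(X[t:t + 1])[0]
    resid = phi @ theta - y[t]
    eta = 1.0 / np.sqrt(t + 1)  # decaying step size (a common assumption)
    theta -= eta * (resid * phi + lam * theta)

# Evaluate the learned predictor against the noiseless target on a grid
X_test = np.linspace(-1, 1, 100)[:, None]
y_pred = features(X_test) @ theta
mse = np.mean((y_pred - np.sin(3 * X_test[:, 0])) ** 2)
```

Note that the learned iterate `theta` lives in the random-feature space rather than in the original RKHS, which is precisely the technical difficulty the abstract points to: the learning sequence may fall outside the hypothesis space induced by the kernel.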

Related articles:
arXiv:2402.04613 [stat.ML] (Published 2024-02-07)
Wasserstein Gradient Flows for Moreau Envelopes of f-Divergences in Reproducing Kernel Hilbert Spaces
arXiv:2106.08443 [stat.ML] (Published 2021-06-15)
Reproducing Kernel Hilbert Space, Mercer's Theorem, Eigenfunctions, Nyström Method, and Use of Kernels in Machine Learning: Tutorial and Survey
arXiv:2107.01473 [stat.ML] (Published 2021-07-03)
Slope and generalization properties of neural networks