arXiv:1608.03339 [cs.LG]

Distributed Learning with Regularized Least Squares

Shaobo Lin, Xin Guo, Dingxuan Zhou

Published 2016-08-11 (Version 1)

We study distributed learning with the least squares regularization scheme in a reproducing kernel Hilbert space (RKHS). Following a divide-and-conquer approach, the algorithm partitions a data set into disjoint subsets, applies the least squares regularization scheme to each subset to produce a local output function, and then takes the average of these local output functions as the final global estimator or predictor. We show, with error bounds in expectation in both the $L^2$-metric and the RKHS-metric, that the global output function of this distributed learning scheme is a good approximation of the output of the algorithm run on the whole data set on a single machine. Our error bounds are sharp and stated in a general setting without any eigenfunction assumption. The analysis is achieved by a novel second-order decomposition of operator differences in our integral operator approach. Even for the classical least squares regularization scheme in the RKHS associated with a general kernel, we give the best learning rate in the literature.
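The divide-and-conquer scheme described above can be sketched in a few lines of code. The following is a minimal illustration, not the paper's implementation: it uses scikit-learn's KernelRidge as the local regularized least squares solver, and the Gaussian kernel, the partition count, and all parameter values are illustrative assumptions.

```python
# Sketch of divide-and-conquer regularized least squares:
# partition the data, solve kernel ridge regression on each
# disjoint subset, and average the local output functions.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def distributed_krr(X, y, n_partitions=4, alpha=1e-2, gamma=1.0):
    """Fit one regularized least squares estimator per disjoint data subset."""
    indices = np.array_split(np.random.permutation(len(X)), n_partitions)
    models = []
    for idx in indices:
        model = KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma)
        model.fit(X[idx], y[idx])
        models.append(model)
    return models

def predict_average(models, X_new):
    """Global estimator: the plain average of the local output functions."""
    return np.mean([m.predict(X_new) for m in models], axis=0)

# Usage on synthetic data (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(2000)
models = distributed_krr(X, y)
X_test = np.linspace(-1, 1, 5).reshape(-1, 1)
print(predict_average(models, X_test))
```

The paper's error bounds compare the averaged estimator produced this way with the estimator obtained by solving a single regularized least squares problem on the full data set; the sketch only shows the algorithmic structure, not the analysis.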

Related articles: Most relevant | Search more
arXiv:2501.15163 [cs.LG] (Published 2025-01-25)
Learning with Noisy Labels: the Exploration of Error Bounds in Classification
arXiv:2002.11187 [cs.LG] (Published 2020-02-25)
Reliable Estimation of Kullback-Leibler Divergence by Controlling Discriminator Complexity in the Reproducing Kernel Hilbert Space
arXiv:2002.04753 [cs.LG] (Published 2020-02-12)
A Random-Feature Based Newton Method for Empirical Risk Minimization in Reproducing Kernel Hilbert Space