arXiv:1005.2296 [cs.LG]

Online Learning of Noisy Data with Kernels

Nicolò Cesa-Bianchi, Shai Shalev-Shwartz, Ohad Shamir

Published 2010-05-13, updated 2010-05-20 (version 2)

We study online learning when individual instances are corrupted by adversarially chosen random noise. We assume the noise distribution is unknown, and may change over time with no restriction other than having zero mean and bounded variance. Our technique relies on a family of unbiased estimators for non-linear functions, which may be of independent interest. We show that a variant of online gradient descent can learn functions in any dot-product (e.g., polynomial) or Gaussian kernel space with any analytic convex loss function. Our variant uses randomized estimates that need to query a random number of noisy copies of each instance, where with high probability this number is upper bounded by a constant. Allowing such multiple queries cannot be avoided: Indeed, we show that online learning is in general impossible when only one noisy copy of each instance can be accessed.
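To make the estimator idea concrete, here is a minimal Python sketch of one way an unbiased estimator for an analytic function of noisy inputs can be built. This is an illustration under stated assumptions, not the paper's exact construction: the random number of queries N is drawn from a geometric distribution (an illustrative choice), and f_coeffs, query_noisy_copy, and the parameter q are hypothetical names. Because the noise is zero-mean and independent across queries, the reweighted product of inner products is unbiased for f(<w, x>).

```python
import numpy as np
from math import factorial

def unbiased_estimate(f_coeffs, query_noisy_copy, w, q=0.5, rng=None):
    """Sketch: unbiased estimate of f(<w, x>) given only noisy copies of x.

    f_coeffs(n): n-th Taylor coefficient a_n of f(z) = sum_n a_n z**n.
    query_noisy_copy(): returns x plus zero-mean noise, fresh on each call.
    The random degree N satisfies P(N = n) = (1 - q) * q**n for n >= 0.
    """
    rng = rng or np.random.default_rng()
    n = rng.geometric(1 - q) - 1        # shift so n ranges over 0, 1, 2, ...
    prod = 1.0
    for _ in range(n):                  # query n independent noisy copies
        prod *= np.dot(w, query_noisy_copy())
    # Independence and zero-mean noise give E[prod | N = n] = <w, x>**n,
    # so E[(a_N / P(N)) * prod] = sum_n a_n <w, x>**n = f(<w, x>).
    p_n = (1 - q) * q**n
    return (f_coeffs(n) / p_n) * prod

# Example: estimate exp(<w, x>) (coefficients a_n = 1/n!) when each query
# returns x corrupted by Gaussian noise; the average of many estimates
# should concentrate around the true value.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.array([0.3, -0.2])
    w = np.array([0.5, 1.0])
    noisy = lambda: x + rng.normal(scale=0.1, size=2)
    est = np.mean([unbiased_estimate(lambda n: 1.0 / factorial(n), noisy, w, rng=rng)
                   for _ in range(20000)])
    print(est, np.exp(w @ x))
```

Because P(N >= n) decays geometrically in n, the number of noisy copies queried per instance is bounded by a constant with high probability, which matches the multiple-query behavior described in the abstract.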

Comments: This is a full version of the paper appearing in the 23rd International Conference on Learning Theory (COLT 2010)
Categories: cs.LG
Related articles:
arXiv:1403.6863 [cs.LG] (Published 2014-03-26): Online Learning of k-CNF Boolean Functions
arXiv:1810.01920 [cs.LG] (Published 2018-10-03): Generalized Inverse Optimization through Online Learning
arXiv:1802.02871 [cs.LG] (Published 2018-02-08): Online Learning: A Comprehensive Survey