arXiv Analytics


arXiv:1802.04350 [cs.LG]

On the Sample Complexity of Learning from a Sequence of Experiments

Longyun Guo, Jean Honorio, John Morgan

Published 2018-02-12 (Version 1)

We analyze the sample complexity of a new problem: learning from a sequence of experiments. In this problem, the learner must choose a hypothesis that performs well with respect to an infinite sequence of experiments and their associated data distributions. In practice, the learner can only perform m experiments, with a total of N samples drawn from those data distributions. Using a Rademacher complexity approach, we show that the gap between the training and generalization error is O(√(m/N)). We also provide examples for linear prediction, two-layer neural networks, and kernel methods.
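As a rough illustration (not the paper's method), the Rademacher complexity quantity behind such bounds can be estimated by Monte Carlo. The sketch below does this for unit-norm linear predictors, one of the example classes the abstract names, on data pooled from m toy "experiment" distributions; all distributions, dimensions, and parameter values here are made up for the example. For this class the inner supremum has a closed form via Cauchy-Schwarz, so no optimization is needed.

```python
# A minimal sketch, assuming a toy setup: estimate the empirical Rademacher
# complexity of the unit-norm linear class F = { x -> <w, x> : ||w||_2 <= 1 }.
# By Cauchy-Schwarz, sup_{||w||<=1} (1/N) sum_i sigma_i <w, x_i>
# equals (1/N) * ||sum_i sigma_i x_i||_2.
import numpy as np

rng = np.random.default_rng(0)

def empirical_rademacher_linear(X, n_draws=1000):
    """Monte Carlo estimate of E_sigma[(1/N) * ||sum_i sigma_i x_i||_2]."""
    N = X.shape[0]
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=N)   # Rademacher signs
        total += np.linalg.norm(sigma @ X) / N    # closed-form supremum
    return total / n_draws

# Hypothetical "sequence of experiments": m distributions with shifted
# means, N/m samples drawn from each (all values illustrative).
m, N, d = 8, 800, 20
X = np.vstack([
    rng.normal(loc=k / m, scale=1.0, size=(N // m, d)) for k in range(m)
])

print(f"estimated Rademacher complexity ~ {empirical_rademacher_linear(X):.4f}")
print(f"sqrt(m/N) reference scale       ~ {np.sqrt(m / N):.4f}")
```

The second printout is only a reference scale for the O(√(m/N)) rate stated in the abstract; the exact constants in the paper's bound depend on the hypothesis class and the loss.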
