arXiv Analytics

arXiv:2312.04027 [cs.LG]

The sample complexity of multi-distribution learning

Binghui Peng

Published 2023-12-07 (Version 1)

Multi-distribution learning generalizes classic PAC learning to handle data coming from multiple distributions. Given a set of $k$ data distributions and a hypothesis class of VC dimension $d$, the goal is to learn a hypothesis that minimizes the maximum population loss over the $k$ distributions, up to $\epsilon$ additive error. In this paper, we settle the sample complexity of multi-distribution learning by giving an algorithm with sample complexity $\widetilde{O}((d+k)\epsilon^{-2}) \cdot (k/\epsilon)^{o(1)}$. This matches the lower bound up to sub-polynomial factors and resolves the COLT 2023 open problem of Awasthi, Haghtalab and Zhao [AHZ23].
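
In symbols (a restatement of the abstract's verbal definition; the population-loss notation $\mathcal{L}_{\mathcal{D}_i}(h)$ is introduced here only for illustration): given distributions $\mathcal{D}_1, \ldots, \mathcal{D}_k$ and a hypothesis class $\mathcal{H}$ of VC dimension $d$, the learner must output a hypothesis $\hat{h}$ satisfying

$$\max_{i \in [k]} \mathcal{L}_{\mathcal{D}_i}(\hat{h}) \;\le\; \min_{h \in \mathcal{H}} \max_{i \in [k]} \mathcal{L}_{\mathcal{D}_i}(h) + \epsilon,$$

using $\widetilde{O}((d+k)\epsilon^{-2}) \cdot (k/\epsilon)^{o(1)}$ samples in total across the $k$ distributions.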

Related articles:
arXiv:1906.00264 [cs.LG] (Published 2019-06-01)
Graph-based Discriminators: Sample Complexity and Expressiveness
arXiv:1802.04350 [cs.LG] (Published 2018-02-12)
On the Sample Complexity of Learning from a Sequence of Experiments
arXiv:1206.6461 [cs.LG] (Published 2012-06-27)
On the Sample Complexity of Reinforcement Learning with a Generative Model