arXiv Analytics

arXiv:2103.15569 [cs.LG]

Risk Bounds for Learning via Hilbert Coresets

Spencer Douglas, Piyush Kumar, R. K. Prasanth

Published 2021-03-29 (Version 1)

We develop a formalism for constructing stochastic upper bounds on the expected full sample risk for supervised classification tasks via the Hilbert coresets approach within a transductive framework. We explicitly compute tight and meaningful bounds for complex datasets and complex hypothesis classes, such as state-of-the-art deep neural network architectures. The bounds we develop exhibit several desirable properties: i) they are non-uniform in the hypothesis space, ii) in many practical examples, they become effectively deterministic with an appropriate choice of prior and training data-dependent posterior distributions on the hypothesis space, and iii) they improve significantly as the size of the training set increases. We also lay out some ideas to explore in future research.
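The Hilbert coresets approach referenced in the abstract builds on the idea of approximating a full-data sum of Hilbert-space embeddings by a small weighted subset. As a rough illustration only (this is a generic Frank-Wolfe-style construction in the spirit of prior Hilbert coresets work, not the paper's own formalism; the function name, embedding matrix `G`, and per-point norms `sigma` are assumptions for the sketch):

```python
import numpy as np

def hilbert_coreset_fw(G, m):
    """Frank-Wolfe sketch of a Hilbert coreset.

    G : (n, d) array; row i is a finite-dimensional embedding g_i of point i.
    m : number of Frank-Wolfe iterations (upper bound on coreset size).
    Returns nonnegative weights w with at most m nonzeros such that
    G.T @ w approximates the full-data sum G.sum(axis=0).
    """
    n = G.shape[0]
    sigma = np.linalg.norm(G, axis=1)   # per-point embedding norms
    sigma_tot = sigma.sum()
    target = G.sum(axis=0)              # full-data sum to approximate
    w = np.zeros(n)
    for _ in range(m):
        r = target - G.T @ w            # current residual
        # greedy vertex: point whose normalized embedding best aligns with r
        f = int(np.argmax((G @ r) / sigma))
        # Frank-Wolfe direction in embedding space toward vertex (sig_tot/sig_f) g_f
        d = (sigma_tot / sigma[f]) * G[f] - G.T @ w
        dd = d @ d
        gamma = (d @ r) / dd if dd > 0 else 0.0   # exact line search
        gamma = float(np.clip(gamma, 0.0, 1.0))
        w = (1.0 - gamma) * w
        w[f] += gamma * sigma_tot / sigma[f]
    return w
```

Each iteration adds at most one new point to the support, so after `m` steps the weighted subset has at most `m` nonzero entries while its weighted sum tracks the full-data sum, which is the structure the risk bounds in the abstract exploit.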

Related articles:
arXiv:1910.13886 [cs.LG] (Published 2019-10-30)
Risk bounds for reservoir computing
arXiv:2108.08887 [cs.LG] (Published 2021-08-19)
Risk Bounds and Calibration for a Smart Predict-then-Optimize Method
arXiv:2502.02921 [cs.LG] (Published 2025-02-05)
Robust Reward Alignment in Hypothesis Space