arXiv:2106.11180 [math.OC]

Complexity-Free Generalization via Distributionally Robust Optimization

Henry Lam, Yibo Zeng

Published 2021-06-21 (Version 1)

Established approaches to obtain generalization bounds in data-driven optimization and machine learning mostly build on solutions from empirical risk minimization (ERM), which depend crucially on the functional complexity of the hypothesis class. In this paper, we present an alternate route to obtain these bounds on the solution from distributionally robust optimization (DRO), a recent data-driven optimization framework based on worst-case analysis and the notion of ambiguity set to capture statistical uncertainty. In contrast to the hypothesis class complexity in ERM, our DRO bounds depend on the ambiguity set geometry and its compatibility with the true loss function. Notably, when using maximum mean discrepancy as a DRO distance metric, our analysis implies, to the best of our knowledge, the first generalization bound in the literature that depends solely on the true loss function, entirely free of any complexity measures or bounds on the hypothesis class.
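For orientation, the following is a minimal sketch, in our own notation rather than the paper's, of the two formulations the abstract contrasts: ERM minimizes the average loss under the empirical distribution, while DRO minimizes the worst-case expected loss over an ambiguity set around it, illustrated here with an MMD ball of an assumed radius epsilon.

```latex
% Empirical risk minimization (ERM): average loss under the empirical
% distribution \hat{P}_n built from the samples \xi_1, \dots, \xi_n.
\min_{h \in \mathcal{H}} \; \frac{1}{n} \sum_{i=1}^{n} \ell(h; \xi_i)

% Distributionally robust optimization (DRO): worst-case expected loss over
% an ambiguity set of distributions within distance \epsilon of \hat{P}_n,
% shown here with the maximum mean discrepancy (MMD) as the distance metric.
\min_{h \in \mathcal{H}} \;
  \sup_{Q \,:\, \mathrm{MMD}(Q, \hat{P}_n) \le \epsilon} \;
  \mathbb{E}_{\xi \sim Q}\!\left[ \ell(h; \xi) \right]
```

In these terms, the abstract's claim is that generalization bounds for the DRO solution can be driven by the geometry of the ambiguity set and its compatibility with the true loss, rather than by complexity measures on the hypothesis class \(\mathcal{H}\) as in ERM-based analyses.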

Related articles:
arXiv:2010.05893 [math.OC] (Published 2020-10-12)
Large-Scale Methods for Distributionally Robust Optimization
arXiv:2411.02549 [math.OC] (Published 2024-11-04)
Distributionally Robust Optimization
arXiv:2212.01518 [math.OC] (Published 2022-12-03)
Hedging against Complexity: Distributionally Robust Optimization with Parametric Approximation