arXiv Analytics

arXiv:2206.13269 [stat.ML]

The Performance of Wasserstein Distributionally Robust M-Estimators in High Dimensions

Liviu Aolaritei, Soroosh Shafieezadeh-Abadeh, Florian Dörfler

Published 2022-06-27 (Version 1)

Wasserstein distributionally robust optimization has recently emerged as a powerful framework for robust estimation, enjoying good out-of-sample performance guarantees, well-understood regularization effects, and computationally tractable dual reformulations. In this framework, the estimator is obtained by minimizing the worst-case expected loss over all probability distributions which are close, in a Wasserstein sense, to the empirical distribution. In this paper, we propose a Wasserstein distributionally robust M-estimation framework to estimate an unknown parameter from noisy linear measurements, and we focus on the important and challenging task of analyzing the squared error performance of such estimators. Our study is carried out in the modern high-dimensional proportional regime, where both the ambient dimension and the number of samples go to infinity at a proportional rate which encodes the under/over-parametrization of the problem. Under an isotropic Gaussian features assumption, we show that the squared error can be recovered as the solution of a convex-concave optimization problem which, surprisingly, involves at most four scalar variables. To the best of our knowledge, this is the first work to study this problem in the context of Wasserstein distributionally robust M-estimation.
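The worst-case formulation described in the abstract can be made concrete with a small numerical sketch. The snippet below is not code from the paper: it simulates noisy linear measurements with isotropic Gaussian features in the proportional regime and fits an M-estimator using an empirical loss plus an epsilon-scaled norm penalty as a hedged stand-in for the worst-case expected loss over a Wasserstein ball; the radius eps, the squared loss, and the problem sizes are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): Wasserstein DRO-style
# M-estimation for noisy linear measurements in the proportional regime.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Proportional high-dimensional regime: d / n stays fixed as both grow.
n, d = 200, 100
beta_true = rng.normal(size=d) / np.sqrt(d)   # unknown parameter (illustrative scaling)
X = rng.normal(size=(n, d))                   # isotropic Gaussian features
y = X @ beta_true + 0.5 * rng.normal(size=n)  # noisy linear measurements

eps = 0.1  # Wasserstein ambiguity radius (assumed value)

def dro_surrogate(beta):
    # Empirical squared loss plus an eps-scaled norm penalty: a hedged
    # stand-in for the worst-case expected loss over the Wasserstein ball
    # centered at the empirical distribution.
    residuals = y - X @ beta
    return np.mean(residuals ** 2) + eps * np.linalg.norm(beta)

# Warm-start from the least-squares solution and minimize the surrogate.
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
beta_hat = minimize(dro_surrogate, x0=beta_ls, method="L-BFGS-B").x

# Squared error of the estimator: the quantity whose high-dimensional limit
# the paper characterizes via a low-dimensional convex-concave problem.
print("squared error:", np.sum((beta_hat - beta_true) ** 2))
```

In the paper's analysis, this squared error is characterized, as n and d grow proportionally, by the solution of a convex-concave problem in at most four scalar variables.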

Related articles:
arXiv:1111.5648 [stat.ML] (Published 2011-11-23)
Falsification and future performance
arXiv:2410.16449 [stat.ML] (Published 2024-10-21)
Robust Feature Learning for Multi-Index Models in High Dimensions
arXiv:1610.05604 [stat.ML] (Published 2016-10-18)
Dynamic Assortment Personalization in High Dimensions