arXiv:2102.04259 [stat.ML]

Concentration of Non-Isotropic Random Tensors with Applications to Learning and Empirical Risk Minimization

Mathieu Even, Laurent Massoulié

Published 2021-02-04, Version 1

Dimension is an inherent bottleneck to some modern learning tasks, where optimization methods suffer from the size of the data. In this paper, we study non-isotropic distributions of data and develop tools that aim at reducing these dimensional costs by a dependency on an effective dimension rather than the ambient one. Based on non-asymptotic estimates of the metric entropy of ellipsoids -- which prove to generalize to infinite dimensions -- and on a chaining argument, our uniform concentration bounds involve an effective dimension instead of the global dimension, improving over existing results. We show the importance of taking advantage of non-isotropic properties in learning problems with the following applications: i) we improve state-of-the-art results in statistical preconditioning for communication-efficient distributed optimization, ii) we introduce a non-isotropic randomized smoothing for non-smooth optimization. Both applications cover a class of functions that encompasses empirical risk minimization (ERM) for linear models.
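The two ideas in the abstract can be illustrated numerically. The sketch below is not the authors' code: it uses the common intrinsic-dimension proxy tr(Σ)/‖Σ‖_op for "effective dimension" (the paper's precise definition may differ) and a plain Monte-Carlo version of non-isotropic Gaussian smoothing, f_μ(x) = E[f(x + Σ^{1/2}Z)] with Z ~ N(0, I). With a fast-decaying covariance spectrum, the effective dimension is tiny compared to the ambient one.

```python
# Hedged sketch: effective vs. ambient dimension, and non-isotropic
# randomized smoothing of a non-smooth function. Names and the exact
# effective-dimension formula are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

d = 1000                                 # ambient dimension
eigvals = 1.0 / np.arange(1, d + 1)**2   # decaying spectrum => non-isotropic
# Intrinsic-dimension proxy tr(Sigma) / ||Sigma||_op
effective_dim = eigvals.sum() / eigvals.max()
print(f"ambient dimension:   {d}")
print(f"effective dimension: {effective_dim:.3f}")  # ~ pi^2/6, i.e. about 1.64

def smoothed(f, x, sqrt_sigma, n_samples=2000):
    """Monte-Carlo estimate of f_mu(x) = E[f(x + Sigma^{1/2} Z)], Z ~ N(0, I).

    Drawing Z through Sigma^{1/2} puts most of the smoothing noise along
    high-variance directions -- the "non-isotropic" part of the scheme.
    """
    z = rng.standard_normal((n_samples, x.size))
    return f(x + z @ sqrt_sigma.T).mean()

f = lambda v: np.abs(v).sum(axis=-1)     # non-smooth test function (l1 norm)
sqrt_sigma = np.diag(np.sqrt(eigvals))   # Sigma^{1/2} for a diagonal Sigma
x = np.zeros(d)
print(f"smoothed l1 norm at 0: {smoothed(f, x, sqrt_sigma):.3f}")
```

The point of the first two prints is the gap: the ambient dimension is 1000, but the spectrum concentrates the distribution on a few directions, so bounds scaling with the effective dimension (≈ 1.64 here) are far tighter than those scaling with d.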

Related articles:
arXiv:1209.3079 [stat.ML] (Published 2012-09-14)
Signal Recovery in Unions of Subspaces with Applications to Compressive Imaging
arXiv:1406.0067 [stat.ML] (Published 2014-05-31, updated 2015-05-10)
Optimization via Low-rank Approximation for Community Detection in Networks
arXiv:1904.08548 [stat.ML] (Published 2019-04-18)
A New Class of Time Dependent Latent Factor Models with Applications