arXiv Analytics

arXiv:1103.4296 [math.OC]

Randomized Smoothing for Stochastic Optimization

John C. Duchi, Peter L. Bartlett, Martin J. Wainwright

Published 2011-03-22, updated 2012-04-07 (Version 2)

We analyze convergence rates of stochastic optimization procedures for non-smooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates of stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of our knowledge, these are the first variance-based rates for non-smooth optimization. We give several applications of our results to statistical estimation problems, and provide experimental results that demonstrate the effectiveness of the proposed algorithms. We also describe how a combination of our algorithm with recent work on decentralized optimization yields a distributed stochastic optimization algorithm that is order-optimal.
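The core smoothing idea can be sketched numerically: replace the non-smooth objective f with its Gaussian smoothing f_mu(x) = E[f(x + mu*Z)], whose gradient can be estimated by averaging subgradients of f at randomly perturbed points, and run a stochastic gradient method on the smoothed surrogate. The sketch below is illustrative only, assuming plain SGD rather than the accelerated method analyzed in the paper; the function names, step size, and sample counts are my own choices, not the authors'.

```python
import numpy as np

def smoothed_grad(f_subgrad, x, mu=0.5, n_samples=4, rng=None):
    """Estimate the gradient of the Gaussian smoothing
    f_mu(x) = E[f(x + mu * Z)], Z ~ N(0, I),
    by averaging subgradients of f at perturbed points."""
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        z = rng.standard_normal(x.shape)
        g += f_subgrad(x + mu * z)
    return g / n_samples

# Non-smooth convex test function: f(x) = ||x||_1,
# with subgradient sign(x).
f = lambda x: np.abs(x).sum()
f_subgrad = lambda x: np.sign(x)

x = np.full(5, 3.0)          # start away from the minimizer 0
for t in range(1, 501):
    g = smoothed_grad(f_subgrad, x, mu=0.5, n_samples=4, rng=t)
    x -= 0.05 * g            # SGD step on the smoothed surrogate

print(f(x))                  # small: x has drifted near the minimizer 0
```

The variance of the averaged gradient estimate shrinks with `n_samples`, which is the quantity the paper's variance-based rates are expressed in; the accelerated scheme in the paper additionally adapts mu and the step sizes across iterations.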

Related articles:
arXiv:2402.02461 [math.OC] (Published 2024-02-04, updated 2024-02-07)
Zeroth-order Median Clipping for Non-Smooth Convex Optimization Problems with Heavy-tailed Symmetric Noise
arXiv:1903.02117 [math.OC] (Published 2019-03-05)
Random minibatch projection algorithms for convex problems with functional constraints
arXiv:2007.01983 [math.OC] (Published 2020-07-04)
Primal-Dual Partial Inverse Splitting for Constrained Monotone Inclusions