arXiv:1901.10348 [math.OC]

Stochastic Conditional Gradient Method for Composite Convex Minimization

Francesco Locatello, Alp Yurtsever, Olivier Fercoq, Volkan Cevher

Published 2019-01-29 (Version 1)

In this paper, we propose the first practical algorithm for solving stochastic composite optimization problems over compact convex sets. This template allows for affine constraints and therefore covers stochastic semidefinite programs (SDPs), which are widely applicable in both machine learning and statistics. In this setup, stochastic algorithms with convergence guarantees are either unknown or intractable. We tackle this general problem and propose a convergent, tractable, and easy-to-implement algorithm. We prove an $\mathcal{O}(k^{-1/3})$ convergence rate in expectation on the objective residual and $\mathcal{O}(k^{-5/12})$ in expectation on the feasibility gap. These rates are achieved without increasing the batch size, which can be as small as a single sample. We present extensive empirical evidence demonstrating the superiority of our algorithm on a broad range of applications, including the optimization of stochastic SDPs.
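The abstract does not spell out the algorithm itself. As a rough illustration only, the sketch below shows a generic stochastic conditional gradient (Frank-Wolfe) loop with an averaged gradient estimator, the standard device for obtaining convergence with single-sample batches. The names `lmo` and `stochastic_grad` and the step-size schedules are hypothetical choices for illustration, not the paper's exact method, which additionally handles affine constraints.

```python
import numpy as np

def stochastic_frank_wolfe(lmo, stochastic_grad, x0, iters=1000):
    """Illustrative stochastic conditional gradient (Frank-Wolfe) loop.

    lmo(g): linear minimization oracle over the compact convex set X,
            returning argmin_{s in X} <g, s>.
    stochastic_grad(x): unbiased stochastic gradient estimate at x
                        (a single sample per iteration suffices here).
    """
    x = x0.copy()
    d = np.zeros_like(x0)            # running average of stochastic gradients
    for k in range(1, iters + 1):
        rho = 1.0 / k ** (2.0 / 3)   # averaging weight (illustrative schedule)
        gamma = 2.0 / (k + 1)        # classical Frank-Wolfe step size
        # Averaging damps the variance of the single-sample gradient estimate.
        d = (1 - rho) * d + rho * stochastic_grad(x)
        s = lmo(d)                   # one LMO call per iteration, projection-free
        x = (1 - gamma) * x + gamma * s
    return x
```

Each iteration uses one stochastic gradient and one LMO call and requires no projection, which is what makes this template tractable for SDPs, where the LMO reduces to a leading-eigenvector computation rather than a full eigendecomposition.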

Related articles:
arXiv:1502.01068 [math.OC] (Published 2015-02-04)
Composite convex minimization involving self-concordant-like cost functions
arXiv:2001.05537 [math.OC] (Published 2020-01-15)
Accelerated Dual-Averaging Primal-Dual Method for Composite Convex Minimization
arXiv:2003.01322 [math.OC] (Published 2020-03-03)
Randomized Primal-Dual Algorithms for Composite Convex Minimization with Faster Convergence Rates