arXiv:1703.00439 [math.OC]

Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization

Tomoya Murata, Taiji Suzuki

Published 2017-03-01 (Version 1)

In this paper, we develop a new accelerated stochastic gradient method for efficiently solving the convex regularized empirical risk minimization problem in mini-batch settings. The use of mini-batches is becoming the gold standard in the machine learning community, because mini-batch settings stabilize the gradient estimate and readily exploit parallel computing. The core of our proposed method is the incorporation of our new "double acceleration" technique and a variance reduction technique. We theoretically analyze our proposed method and show that it substantially improves the mini-batch efficiency of previous accelerated stochastic methods, and that it essentially needs only mini-batches of size $\sqrt{n}$ to achieve the optimal iteration complexities for both non-strongly and strongly convex objectives, where $n$ is the training set size. Further, we show that even in non-mini-batch settings, our method surpasses the best known convergence rate for non-strongly convex objectives and matches it for strongly convex objectives.
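For readers unfamiliar with the variance reduction ingredient mentioned in the abstract, the sketch below illustrates a plain SVRG-style mini-batch variance-reduced gradient estimate for the regularized ERM objective $\min_x \frac{1}{n}\sum_{i=1}^{n} f_i(x) + R(x)$ (here with $R \equiv 0$ and synthetic least-squares losses for simplicity). This is a minimal illustration of mini-batch variance reduction only; it does not implement the authors' double acceleration or dual averaging scheme, and the function names, data, and step size are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares losses: f_i(x) = 0.5 * (a_i^T x - b_i)^2  (illustrative data only)
n, d = 1000, 20
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(i, x):
    """Gradient of the i-th loss term f_i at x."""
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    """Full gradient (1/n) * sum_i grad_i(x)."""
    return A.T @ (A @ x - b) / n

def minibatch_vr_gradient(x, x_snap, g_snap, batch_size):
    """SVRG-style mini-batch variance-reduced gradient estimate:
    (1/b) * sum_{i in B} [grad_i(x) - grad_i(x_snap)] + g_snap.
    Unbiased for the full gradient at x; its variance shrinks as x approaches x_snap."""
    batch = rng.choice(n, size=batch_size, replace=False)
    g = g_snap.copy()
    for i in batch:
        g += (grad_i(i, x) - grad_i(i, x_snap)) / batch_size
    return g

# One snapshot, then a few plain (non-accelerated) inner steps
# with mini-batches of size about sqrt(n), as the abstract highlights.
x_snap = np.zeros(d)
g_snap = full_grad(x_snap)
x = x_snap.copy()
step, batch_size = 0.01, int(np.sqrt(n))
for _ in range(50):
    x -= step * minibatch_vr_gradient(x, x_snap, g_snap, batch_size)
```

The snapshot gradient g_snap is recomputed only occasionally, so each inner step touches just a mini-batch of examples while the correction term keeps the estimate unbiased; the paper's contribution is to combine such estimates with dual averaging and double acceleration, which this sketch does not attempt.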

Related articles:
arXiv:1409.3257 [math.OC] (Published 2014-09-10)
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
arXiv:1407.1296 [math.OC] (Published 2014-07-04)
An Accelerated Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
arXiv:1709.03043 [math.OC] (Published 2017-09-10)
Distributed Block-diagonal Approximation Methods for Regularized Empirical Risk Minimization