arXiv:1912.02365 [math.OC]

Lower Bounds for Non-Convex Stochastic Optimization

Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake Woodworth

Published 2019-12-05, Version 1

We lower bound the complexity of finding $\epsilon$-stationary points (with gradient norm at most $\epsilon$) using stochastic first-order methods. In a well-studied model where algorithms access smooth, potentially non-convex functions through queries to an unbiased stochastic gradient oracle with bounded variance, we prove that (in the worst case) any algorithm requires at least $\epsilon^{-4}$ queries to find an $\epsilon$-stationary point. This lower bound is tight and establishes that stochastic gradient descent is minimax optimal in this model. In a more restrictive model where the noisy gradient estimates satisfy a mean-squared smoothness property, we prove a lower bound of $\epsilon^{-3}$ queries, establishing the optimality of recently proposed variance reduction techniques.
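
To make the oracle model concrete, here is a minimal sketch in Python (the test function, function names, step-size rule, and parameter values are illustrative assumptions and are not taken from the paper). It runs constant-step-size SGD against an unbiased stochastic gradient oracle with variance at most $\sigma^2$ and counts oracle queries until the iterate is $\epsilon$-stationary; in the worst case over smooth functions, on the order of $\epsilon^{-4}$ queries are needed, which is the rate the first lower bound shows plain SGD already attains.

    import numpy as np

    # Minimal sketch of the oracle model described above (illustrative only).
    # The algorithm accesses a smooth, possibly non-convex f solely through an
    # unbiased stochastic gradient oracle g with E[g(x)] = grad f(x) and
    # E[||g(x) - grad f(x)||^2] <= sigma^2.

    def grad_f(x):
        # Gradient of the smooth non-convex test function
        # f(x) = sum_i x_i^2 / (1 + x_i^2)  (chosen here for illustration).
        return 2.0 * x / (1.0 + x ** 2) ** 2

    def stochastic_gradient_oracle(x, sigma, rng):
        # Unbiased gradient estimate whose noise has squared norm sigma^2 in expectation.
        return grad_f(x) + sigma * rng.standard_normal(x.shape) / np.sqrt(x.size)

    def sgd_until_stationary(x0, eps=0.1, sigma=1.0, L=2.0, max_queries=10**6, seed=0):
        """Constant-step-size SGD; returns the final iterate and the number of
        oracle queries made before the true gradient norm first drops below eps.
        In the worst case over L-smooth functions with bounded-variance oracles,
        Theta(eps^{-4}) queries are necessary and sufficient, which is the regime
        addressed by the paper's first lower bound."""
        rng = np.random.default_rng(seed)
        x = x0.astype(float).copy()
        eta = eps ** 2 / (L * sigma ** 2)  # standard step size for the eps^{-4} rate
        for t in range(1, max_queries + 1):
            # The true-gradient check is for illustration only; the algorithm
            # itself never observes grad_f directly, only the noisy oracle.
            if np.linalg.norm(grad_f(x)) <= eps:
                return x, t - 1
            x = x - eta * stochastic_gradient_oracle(x, sigma, rng)
        return x, max_queries

    if __name__ == "__main__":
        x_final, queries = sgd_until_stationary(np.full(10, 3.0))
        print(f"epsilon-stationary point found after {queries} oracle queries")

The second result in the abstract concerns a stricter mean-squared-smoothness oracle, under which variance-reduced estimators (rather than the plain SGD update above) achieve the improved $\epsilon^{-3}$ rate that the paper shows is optimal.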

Related articles:
arXiv:1407.5144 [math.OC] (Published 2014-07-19, updated 2017-05-02)
Lower Bounds on the Oracle Complexity of Nonsmooth Convex Optimization via Information Theory
arXiv:2209.09655 [math.OC] (Published 2022-09-20)
Lower Bounds on the Worst-Case Complexity of Efficient Global Optimization
arXiv:1106.1666 [math.OC] (Published 2011-06-08)
Lower bounds for polynomials using geometric programming