arXiv Analytics

arXiv:2008.07513 [cs.LG]

A Realistic Example in Two Dimensions Where Gradient Descent Takes Exponential Time to Escape Saddle Points

Shiliang Zuo

Published 2020-08-17 (Version 1)

Gradient descent is a popular algorithm in optimization, and its performance in convex settings is mostly well understood. In non-convex settings, it has been shown that gradient descent escapes saddle points asymptotically and converges to local minimizers [Lee et al. 2016]. Recent studies also show that a perturbed version of gradient descent suffices to escape saddle points efficiently [Ge et al. 2015, Jin et al. 2017]. In this paper we show a negative result: gradient descent can take exponential time to escape saddle points, even on non-pathological two-dimensional functions. While our focus is theoretical, we also conduct experiments verifying our theoretical result. Through our analysis we demonstrate that stochasticity is essential for escaping saddle points efficiently.
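To make the contrast concrete, here is a minimal sketch (our own illustration, not the paper's construction) of plain versus perturbed gradient descent on the toy saddle f(x, y) = x^2 - y^2: gradient descent started on the stable manifold y = 0 never leaves the saddle, while a tiny random perturbation of the gradient escapes quickly. The step size, noise scale, and escape threshold below are illustrative assumptions.

import numpy as np

# Illustrative toy saddle (not the paper's construction): f(x, y) = x^2 - y^2
# has a strict saddle at the origin with stable manifold y = 0.
def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

def run_gd(p0, lr=0.1, steps=2000, noise=0.0, seed=0):
    """Run (perturbed) gradient descent; return the step at which |y| > 1, or None."""
    rng = np.random.default_rng(seed)
    p = np.array(p0, dtype=float)
    for t in range(steps):
        g = grad(p)
        if noise > 0.0:
            g = g + noise * rng.standard_normal(2)  # small isotropic perturbation
        p = p - lr * g
        if abs(p[1]) > 1.0:  # escaped along the negative-curvature direction
            return t
    return None

print("plain GD escape step:    ", run_gd([1.0, 0.0]))              # None: stuck on y = 0
print("perturbed GD escape step:", run_gd([1.0, 0.0], noise=1e-3))  # escapes after a few dozen steps (seed-dependent)

In this toy setting the escape time of plain gradient descent is infinite rather than merely exponential; the paper's contribution is a realistic, non-pathological two-dimensional example where the escape time is finite but exponentially large, which this sketch does not reproduce.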

Related articles:
arXiv:1703.00887 [cs.LG] (Published 2017-03-02)
How to Escape Saddle Points Efficiently
arXiv:1905.05843 [cs.LG] (Published 2019-05-14)
Task-Driven Data Verification via Gradient Descent
arXiv:1806.00468 [cs.LG] (Published 2018-06-01)
Implicit Bias of Gradient Descent on Linear Convolutional Networks