arXiv:1610.03446 [math.OC]

Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria

Dmitriy Drusvyatskiy, Alexander D. Ioffe, Adrian S. Lewis

Published 2016-10-11 (Version 1)

We consider optimization algorithms that successively minimize simple Taylor-like models of the objective function. Methods of Gauss-Newton type for minimizing the composition of a convex function and a smooth map are common examples. Our main result is an explicit relationship between the step-size of any such algorithm and the slope of the function at a nearby point. Consequently, we (1) show that the step-sizes can be reliably used to terminate the algorithm, (2) prove that as long as the step-sizes tend to zero, every limit point of the iterates is stationary, and (3) show that conditions akin to classical quadratic growth imply that the step-sizes linearly bound the distance of the iterates to the solution set. The latter so-called error bound property is typically used to establish linear (or faster) convergence guarantees. Analogous results hold when the step-size is replaced by the square root of the decrease in the model's value. We conclude the paper with extensions to the setting in which the models are minimized only inexactly.

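To make the model-minimization framework concrete, here is a minimal numerical sketch; it is an illustration under assumptions, not the paper's algorithm. It instantiates the composite objective f(x) = h(c(x)) with the smooth choice h(z) = 0.5*||z||^2, so each Taylor-like model m_x(y) = h(c(x) + ∇c(x)(y − x)) + ||y − x||^2/(2t) has a closed-form minimizer (a Levenberg–Marquardt style step), and the iteration terminates once the step-size falls below a tolerance, in the spirit of point (1) of the abstract. The function name `prox_linear_gauss_newton` and the toy test problem are hypothetical.

```python
import numpy as np

def prox_linear_gauss_newton(c, jac, x0, t=1.0, tol=1e-8, max_iter=200):
    """Sketch (assumed setup, not the paper's method): minimize
    f(x) = 0.5*||c(x)||^2 by successively minimizing the Taylor-like model
    0.5*||c(x) + J(x) d||^2 + ||d||^2/(2*t) in the step d = y - x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = c(x), jac(x)                 # residual and Jacobian at x
        # The model minimizer solves the regularized normal equations
        # (J^T J + I/t) d = -J^T r.
        H = J.T @ J + np.eye(x.size) / t
        d = np.linalg.solve(H, -J.T @ r)
        x = x + d
        if np.linalg.norm(d) < tol:         # small step-size => terminate
            break
    return x

# Hypothetical test problem: a point on the unit circle with equal coordinates.
c = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]],
                          [1.0, -1.0]])
print(prox_linear_gauss_newton(c, jac, x0=[1.0, 0.0]))  # ~ [0.7071, 0.7071]
```

For a genuinely nonsmooth convex h (a norm or an indicator, say), the model subproblem no longer has a closed form and would be handled by a convex solver, but the step-size based termination test would play the same role.
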
Related articles:
arXiv:1801.08691 [math.OC] (Published 2018-01-26)
On Quasi-Newton Forward-Backward Splitting: Proximal Calculus and Convergence
arXiv:1805.06445 [math.OC] (Published 2018-05-16)
On the Convergence of the SINDy Algorithm
arXiv:math/0212255 [math.OC] (Published 2002-12-18)
The Convergence of the Extended Kalman Filter