{ "id": "1610.03446", "version": "v1", "published": "2016-10-11T17:57:27.000Z", "updated": "2016-10-11T17:57:27.000Z", "title": "Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria", "authors": [ "Dmitriy Drusvyatskiy", "Alexander D. Ioffe", "Adrian S. Lewis" ], "comment": "23 pages", "categories": [ "math.OC" ], "abstract": "We consider optimization algorithms that successively minimize simple Taylor-like models of the objective function. Methods of Gauss-Newton type for minimizing the composition of a convex function and a smooth map are common examples. Our main result is an explicit relationship between the step-size of any such algorithm and the slope of the function at a nearby point. Consequently, we (1) show that the step-sizes can be reliably used to terminate the algorithm, (2) prove that as long as the step-sizes tend to zero, every limit point of the iterates is stationary, and (3) show that conditions, akin to classical quadratic growth, imply that the step-sizes linearly bound the distance of the iterates to the solution set. The latter so-called error bound property is typically used to establish linear (or faster) convergence guarantees. Analogous results hold when the step-size is replaced by the square root of the decrease in the model's value. We complete the paper with extensions to when the models are minimized only inexactly.", "revisions": [ { "version": "v1", "updated": "2016-10-11T17:57:27.000Z" } ], "analyses": { "subjects": [ "65K05", "90C30", "49M37", "65K10" ], "keywords": [ "nonsmooth optimization", "termination criteria", "minimize simple taylor-like models", "convergence", "error bound property" ], "note": { "typesetting": "TeX", "pages": 23, "language": "en", "license": "arXiv", "status": "editable" } } }