arXiv Analytics

arXiv:2007.12219 [math.OC]

A First-Order Primal-Dual Method for Nonconvex Constrained Optimization Based On the Augmented Lagrangian

Daoli Zhu, Lei Zhao, Shuzhong Zhang

Published 2020-07-23, Version 1

Nonlinearly constrained nonconvex and nonsmooth optimization models play an increasingly important role in machine learning, statistics, and data analytics. In this paper, based on the augmented Lagrangian function, we introduce a flexible first-order primal-dual method, called the nonconvex auxiliary problem principle of augmented Lagrangian (NAPP-AL), for solving a class of nonlinearly constrained nonconvex and nonsmooth optimization problems. We demonstrate that NAPP-AL converges to a stationary solution at the rate of o(1/\sqrt{k}), where k is the number of iterations. Moreover, under an additional error bound condition (called VP-EB in the paper), we show that the convergence rate is in fact linear. Finally, we show that the well-known Kurdyka-Łojasiewicz property and metric subregularity imply the aforementioned VP-EB condition.
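To illustrate the general flavor of a first-order primal-dual method based on the augmented Lagrangian, here is a minimal sketch on a toy convex problem. This is not the paper's NAPP-AL algorithm; it is a generic primal gradient step on the augmented Lagrangian followed by a dual multiplier ascent step, and the problem, step sizes, and function names are illustrative assumptions.

```python
import numpy as np

def primal_dual_al(A, b, r=1.0, step=0.1, iters=2000):
    """Generic first-order primal-dual augmented-Lagrangian iteration (sketch).

    Illustrative problem: minimize 0.5*||x||^2 subject to A x = b.
    Augmented Lagrangian:
        L_r(x, y) = 0.5*||x||^2 + y^T (A x - b) + (r/2)*||A x - b||^2
    """
    m, n = A.shape
    x = np.zeros(n)  # primal variable
    y = np.zeros(m)  # dual multiplier
    for _ in range(iters):
        # Primal update: one gradient step on L_r(., y)
        grad = x + A.T @ (y + r * (A @ x - b))
        x = x - step * grad
        # Dual update: multiplier ascent on the constraint residual
        y = y + r * (A @ x - b)
    return x, y

# Toy instance: minimize 0.5*||x||^2 subject to x1 + x2 = 1;
# the solution is x = (0.5, 0.5).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = primal_dual_al(A, b)
```

For this strongly convex quadratic with a linear constraint, the iterates converge linearly; the nonconvex, nonsmooth setting treated in the paper requires the auxiliary-problem-principle machinery and the VP-EB condition for comparable guarantees.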

Related articles:
arXiv:1308.6774 [math.OC] (Published 2013-08-30)
Separable Approximations and Decomposition Methods for the Augmented Lagrangian
arXiv:1709.03384 [math.OC] (Published 2017-09-11)
Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity
arXiv:1802.03046 [math.OC] (Published 2018-02-08)
Existence of augmented Lagrange multipliers: reduction to exact penalty functions and localization principle