arXiv:2404.09438 [math.OC]

Developing Lagrangian-based Methods for Nonsmooth Nonconvex Optimization

Nachuan Xiao, Kuangyu Ding, Xiaoyin Hu, Kim-Chuan Toh

Published 2024-04-15 (Version 1)

In this paper, we consider the minimization of a nonsmooth nonconvex objective function $f(x)$ over a closed convex subset $\mathcal{X}$ of $\mathbb{R}^n$, subject to additional nonsmooth nonconvex constraints $c(x) = 0$. We develop a unified framework for constructing Lagrangian-based methods, in which each iteration applies a single-step update to the primal variables via a subgradient method. These subgradient methods are ``embedded'' into our framework in the sense that they are incorporated as black-box updates to the primal variables. We prove that the proposed framework inherits the global convergence guarantees of the embedded subgradient methods under mild conditions. In addition, we show that the framework extends to constrained optimization problems with expectation constraints. Based on the proposed framework, we show that a wide range of existing stochastic subgradient methods, including proximal SGD, proximal momentum SGD, and proximal ADAM, can be embedded into Lagrangian-based methods. Preliminary numerical experiments on deep learning tasks illustrate that the proposed framework yields efficient variants of Lagrangian-based methods with convergence guarantees for nonsmooth nonconvex constrained optimization problems.
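To make the single-loop structure concrete, the following is a minimal sketch (not the authors' exact algorithm) of a Lagrangian-based method in which the primal variable is updated by one step of a black-box projected subgradient routine per iteration, followed by a dual multiplier update. The toy problem, penalty parameter, and step sizes are assumptions chosen purely for illustration.

```python
# Illustrative sketch: augmented Lagrangian loop with an embedded black-box
# single-step primal update.  All problem data and parameters are hypothetical.
import numpy as np

def project_box(x, lo=-2.0, hi=2.0):
    """Projection onto the closed convex set X = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def sgd_step(x, g, lr):
    """Black-box single-step primal update (here: projected subgradient descent)."""
    return project_box(x - lr * g)

def lagrangian_method(f_subgrad, c, c_grad, x0, lam0, rho=10.0,
                      primal_step=sgd_step, lr=1e-2, dual_lr=1e-2, iters=5000):
    """Single-loop Lagrangian-based method with an embedded primal update."""
    x, lam = x0.copy(), lam0
    for _ in range(iters):
        cx = c(x)
        # Subgradient of the augmented Lagrangian
        #   L_rho(x, lam) = f(x) + lam * c(x) + (rho/2) * c(x)^2   w.r.t. x.
        g = f_subgrad(x) + (lam + rho * cx) * c_grad(x)
        x = primal_step(x, g, lr)      # one step of the embedded subgradient method
        lam = lam + dual_lr * c(x)     # dual update on the multiplier
    return x, lam

if __name__ == "__main__":
    n = 20
    rng = np.random.default_rng(0)
    # Toy instance: minimize the nonsmooth f(x) = ||x||_1
    # subject to c(x) = sum(x) - 1 = 0 and x in [-2, 2]^n.
    f_subgrad = lambda x: np.sign(x)
    c = lambda x: np.sum(x) - 1.0
    c_grad = lambda x: np.ones_like(x)
    x, lam = lagrangian_method(f_subgrad, c, c_grad,
                               x0=rng.standard_normal(n), lam0=0.0)
    print(f"||x||_1 = {np.abs(x).sum():.3f}, constraint violation = {c(x):.2e}")
```

In the paper's setting, `sgd_step` would be replaced by the chosen stochastic method (e.g., proximal SGD, proximal momentum SGD, or proximal ADAM), treated as a black box that performs exactly one primal update per outer iteration.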

Related articles:
arXiv:1803.03466 [math.OC] (Published 2018-03-09)
A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
arXiv:2409.10323 [math.OC] (Published 2024-09-16)
On the Hardness of Meaningful Local Guarantees in Nonsmooth Nonconvex Optimization
arXiv:1908.01086 [math.OC] (Published 2019-08-02)
Subgradient Methods for Risk-Sensitive Optimization