arXiv Analytics


arXiv:2505.07143 [math.OC]

Subgradient Regularization: A Descent-Oriented Subgradient Method for Nonsmooth Optimization

Hanyang Li, Ying Cui

Published 2025-05-11 (Version 1)

In nonsmooth optimization, a negative subgradient is not necessarily a descent direction, making the design of convergent descent methods based on zeroth-order and first-order information a challenging task. The well-studied bundle methods and gradient sampling algorithms construct descent directions by aggregating subgradients at nearby points in seemingly different ways, and are often complicated or lack deterministic guarantees. In this work, we identify a unifying principle behind these approaches and develop, under this abstract principle, a general framework of descent methods that provably converge to stationary points. Within this framework, we introduce a simple yet effective technique, called subgradient regularization, to generate stable descent directions for a broad class of nonsmooth marginal functions, including finite maxima or minima of smooth functions. When applied to the composition of a convex function with a smooth map, the method naturally recovers the prox-linear method and, as a byproduct, provides a new dual interpretation of this classical algorithm. Numerical experiments demonstrate the effectiveness of our methods on several challenging classes of nonsmooth optimization problems, including the minimization of Nesterov's nonsmooth Chebyshev-Rosenbrock function.
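To make the opening premise concrete, here is a minimal numerical sketch (not the paper's algorithm, and the function, point, and step size below are illustrative assumptions): for the finite-max function f(x) = |x1| + 2|x2| = max{s1*x1 + 2*s2*x2 : s1, s2 in {-1, +1}}, the negative of a valid subgradient can fail to be a descent direction, while the minimum-norm element of the convex hull of the active-piece gradients, an aggregation in the spirit of bundle and gradient sampling methods referenced above, does yield descent.

    import numpy as np

    def f(x):
        # finite maximum of smooth (here linear) pieces: |x1| + 2|x2|
        return abs(x[0]) + 2.0 * abs(x[1])

    x = np.array([1.0, 0.0])        # point of nondifferentiability in the second coordinate
    g1 = np.array([1.0,  2.0])      # gradient of the active piece  x1 + 2*x2
    g2 = np.array([1.0, -2.0])      # gradient of the active piece  x1 - 2*x2
    t = 1e-3                        # small trial step size (illustrative choice)

    # The negative of the valid subgradient g1 is NOT a descent direction:
    print(f(x), f(x - t * g1))      # 1.0 -> 1.003: the objective increases

    # Aggregate the two active gradients: approximate the minimum-norm convex
    # combination lam*g1 + (1-lam)*g2 by a simple scan over lam in [0, 1].
    lams = np.linspace(0.0, 1.0, 1001)
    combos = np.outer(lams, g1) + np.outer(1.0 - lams, g2)
    d = combos[np.argmin(np.linalg.norm(combos, axis=1))]   # approximately (1, 0)

    print(f(x), f(x - t * d))       # 1.0 -> 0.999: a genuine descent step

The scan over convex combinations stands in for the small quadratic program that bundle-type methods solve; it is used here only to keep the sketch dependency-free.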

Related articles:
arXiv:1610.03446 [math.OC] (Published 2016-10-11)
Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
arXiv:2407.12984 [math.OC] (Published 2024-07-17)
Nonlinear tomographic reconstruction via nonsmooth optimization
arXiv:2407.02146 [math.OC] (Published 2024-07-02)
Coderivative-Based Newton Methods with Wolfe Linesearch for Nonsmooth Optimization