arXiv Analytics

arXiv:2010.13893 [math.OC]

General higher-order majorization-minimization algorithms for (non)convex optimization

Ion Necoara, Daniela Lupu

Published 2020-10-26 (Version 1)

Majorization-minimization algorithms successively minimize a sequence of upper bounds of the objective function, so that the objective decreases along the iterations. This simple principle makes it possible to solve a large class of optimization problems, convex or nonconvex, smooth or nonsmooth. We propose a general higher-order majorization-minimization algorithm for minimizing an objective function that admits an approximation (surrogate) such that the corresponding error function has a higher-order Lipschitz continuous derivative. We present convergence guarantees for our new method for general optimization problems with convex or nonconvex, smooth or nonsmooth objective functions. For convex (possibly nonsmooth) problems we provide global sublinear convergence rates, while for problems with a uniformly convex objective function we obtain faster local convergence rates, namely superlinear. We also prove global asymptotic stationary-point guarantees for general nonconvex problems, and under the Kurdyka-Lojasiewicz property of the objective function we derive local convergence rates ranging from sublinear to superlinear for our majorization-minimization algorithm. Moreover, for composite (unconstrained) nonconvex problems we derive convergence rates in terms of first- and second-order optimality conditions.
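
To make the principle concrete, here is a minimal one-dimensional sketch of a majorization-minimization iteration, using the standard quadratic majorizer of a function with Lipschitz gradient; the paper's framework covers far more general higher-order surrogates. The test function, the constant L, and all names below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Minimal 1-D sketch of the majorization-minimization (MM) principle.
# For f with L-Lipschitz derivative, the quadratic surrogate
#   g(x; y) = f(y) + f'(y)(x - y) + (L/2)(x - y)^2
# upper-bounds f and touches it at y; minimizing g over x gives the step
#   x = y - f'(y)/L, so f decreases monotonically along the iterates.
# The test function and L below are illustrative, not from the paper.

def f(x):
    return np.log(1.0 + x**2)      # smooth nonconvex test objective

def df(x):
    return 2.0 * x / (1.0 + x**2)  # its derivative

L = 2.0                            # a valid Lipschitz constant for df here
x = 3.0                            # starting point
for k in range(50):
    x_new = x - df(x) / L          # exact minimizer of the surrogate g(.; x)
    assert f(x_new) <= f(x) + 1e-12  # monotone decrease, the MM hallmark
    x = x_new
print(f"x* = {x:.6f}, f(x*) = {f(x):.6f}")
```

The paper's higher-order variant replaces this quadratic bound with a surrogate whose error function has a Lipschitz continuous higher-order derivative, which is what yields the faster (up to superlinear) rates stated in the abstract.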

Related articles
arXiv:2206.08627 [math.OC] (Published 2022-06-17)
RECAPP: Crafting a More Efficient Catalyst for Convex Optimization
arXiv:2004.14459 [math.OC] (Published 2020-04-29)
On the Asymptotic Behavior of the Douglas-Rachford and Proximal-Point Algorithms for Convex Optimization
arXiv:1405.4980 [math.OC] (Published 2014-05-20, updated 2015-11-16)
Convex Optimization: Algorithms and Complexity