arXiv:2010.13893 [math.OC]
General higher-order majorization-minimization algorithms for (non)convex optimization
Published 2020-10-26, Version 1
Majorization-minimization algorithms consist of successively minimizing a sequence of upper bounds of the objective function, so that the objective function decreases along the iterations. This simple principle makes it possible to solve a large class of optimization problems, convex or nonconvex, smooth or nonsmooth. We propose a general higher-order majorization-minimization algorithm for minimizing an objective function that admits an approximation (surrogate) such that the corresponding error function has a higher-order Lipschitz continuous derivative. We present convergence guarantees for our new method for general optimization problems with convex or nonconvex, smooth or nonsmooth objective functions. For convex (possibly nonsmooth) problems we provide global sublinear convergence rates, while for problems with a uniformly convex objective function we obtain faster local convergence rates, namely superlinear. We also prove global asymptotic stationary-point guarantees for general nonconvex problems, and under the Kurdyka-Lojasiewicz property of the objective function we derive local convergence rates, ranging from sublinear to superlinear, for our majorization-minimization algorithm. Moreover, for composite (unconstrained) nonconvex problems we derive convergence rates in terms of first- and second-order optimality conditions.
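As a minimal sketch of the majorization-minimization principle described above (in generic notation not taken from the paper: f denotes the objective and g(.; x_k) a surrogate built at the current iterate x_k), the surrogate upper bounds the objective and is tight at x_k, so minimizing it guarantees descent:

    g(x; x_k) \ge f(x) \quad \forall x, \qquad g(x_k; x_k) = f(x_k),
    x_{k+1} \in \operatorname*{arg\,min}_x \; g(x; x_k),
    f(x_{k+1}) \le g(x_{k+1}; x_k) \le g(x_k; x_k) = f(x_k).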