arXiv Analytics

arXiv:1912.12004 [math.OC]

Nearly optimal first-order methods for convex optimization under gradient norm measure: An adaptive regularization approach

Masaru Ito, Mituhiro Fukuda

Published 2019-12-27 (Version 1)

In the development of first-order methods for smooth (resp., composite) convex optimization problems, the gradient (resp., gradient mapping) norm is a fundamental optimality measure, for which a regularization technique of first-order methods is known to be nearly optimal. In this paper, we present an adaptive regularization approach that attains this iteration complexity without prior knowledge of the distance from the initial point to the optimal solution set, which the existing regularization approach requires to be known. To obtain even faster convergence adaptively, we then apply this approach to construct a first-order method that is adaptive to the H\"olderian error bound condition (or, equivalently, the {\L}ojasiewicz gradient property), which covers a moderately wide class of applications. The proposed method attains the nearly optimal iteration complexity with respect to the gradient mapping norm.
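For orientation, the display below gives a minimal, illustrative formulation of the two ingredients named in the abstract: the regularized subproblem underlying the classical (non-adaptive) regularization technique, and the H\"olderian error bound condition. The symbols $f$, $x_0$, $\sigma$, $X^*$, $f^*$, $c$, and $\rho$ are generic placeholders rather than the paper's notation, and the paper's adaptive update rules are not reproduced here.

% Classical regularization technique (non-adaptive form): add a strongly
% convex proximal term centered at the initial point x_0 and solve the
% subproblem with a fast first-order method. The regularization weight
% sigma is classically tuned using the distance ||x_0 - x^*||, which is
% exactly the quantity the adaptive approach avoids having to know.
\[
  \min_{x} \; f(x) + \frac{\sigma}{2}\,\|x - x_0\|^2 .
\]

% Hoelderian error bound (Lojasiewicz-type) condition on a level set:
% the objective gap dominates a power of the distance to the solution set X^*.
\[
  f(x) - f^* \;\ge\; c \,\operatorname{dist}(x, X^*)^{\rho},
  \qquad \rho \ge 1,\; c > 0 .
\]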

Related articles:
arXiv:2004.14459 [math.OC] (Published 2020-04-29)
On the Asymptotic Behavior of the Douglas-Rachford and Proximal-Point Algorithms for Convex Optimization
arXiv:2105.08368 [math.OC] (Published 2021-05-18, updated 2022-08-18)
Convergence Rates of Gradient Methods for Convex Optimization in the Space of Measures
arXiv:1509.05647 [math.OC] (Published 2015-09-18)
Fast and Simple PCA via Convex Optimization