arXiv Analytics


arXiv:1802.01062 [math.OC]

How to Characterize the Worst-Case Performance of Algorithms for Nonconvex Optimization

Frank E. Curtis, Daniel P. Robinson

Published 2018-02-04 (Version 1)

A proposal is presented for how to characterize the worst-case performance of algorithms for solving smooth nonconvex optimization problems. Contemporary analyses characterize worst-case performance by providing, under certain broad assumptions on the objective function, an upper bound on the number of iterations (or function or derivative evaluations) required until a pth-order stationarity condition is approximately satisfied. However, this arguably leads to characterizations based on anomalous objective functions rather than on ones that are typically encountered in practice. By contrast, the proposal in this paper recommends characterizing worst-case performance over distinct regions of the search space. These regions are defined by partitioning the search space according to whether regularized pth-order derivative models of the objective function predict a decrease proportional to an objective value error. This strategy can be adopted to characterize the behavior of algorithms when minimizing different classes of objective functions.
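For orientation, the kind of condition involved can be sketched as follows. This is an illustrative sketch only; the symbols m_p, sigma_p, kappa, and f_low are notational assumptions of this sketch, not the precise definitions used by Curtis and Robinson.

% Sketch (assumed notation): a regularized pth-order Taylor model of f at x, and the
% region on which minimizing it predicts a decrease proportional to the objective value error.
\[
  m_p(x, s) \;=\; f(x) \;+\; \sum_{j=1}^{p} \frac{1}{j!}\,\nabla^j f(x)[s]^j \;+\; \frac{\sigma_p}{(p+1)!}\,\|s\|^{p+1},
\]
\[
  \mathcal{R}_p \;=\; \Bigl\{\, x \;:\; f(x) - \min_{s}\, m_p(x, s) \;\geq\; \kappa \bigl( f(x) - f_{\mathrm{low}} \bigr) \Bigr\},
\]
where \(f_{\mathrm{low}}\) denotes a lower bound on \(f\) and \(\kappa \in (0, 1]\). On a region of this form, the model-predicted decrease is at least a fixed fraction of the objective value error, which is the sense in which worst-case behavior would be characterized region by region rather than globally.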

Related articles:
arXiv:1011.3781 [math.OC] (Published 2010-11-16, updated 2010-12-22)
Sparse PCA: Convex Relaxations, Algorithms and Applications
arXiv:2106.07795 [math.OC] (Published 2021-06-14)
Interpretation of Plug-and-Play (PnP) algorithms from a different angle
arXiv:1609.07537 [math.OC] (Published 2016-09-23)
A Tutorial on Distributed (Non-Bayesian) Learning: Problem, Algorithms and Results