
arXiv:2404.10993 [math.OC]

A Proximal Gradient Method with an Explicit Line Search for Multiobjective Optimization

Yunier Bello-Cruz, J. G. Melo, L. F. Prudente, R. V. G. Serra

Published 2024-04-17, Version 1

We present a proximal gradient method for solving convex multiobjective optimization problems, where each objective function is the sum of two convex functions, one of which is assumed to be continuously differentiable. The algorithm incorporates a backtracking line search procedure that requires solving only one proximal subproblem per iteration and is applied exclusively to the differentiable part of the objective functions. Under mild assumptions, we show that the sequence generated by the method converges to a weakly Pareto optimal point of the problem. Additionally, we establish an iteration complexity bound by showing that the method finds an $\varepsilon$-approximate weakly Pareto point in at most ${\cal O}(1/\varepsilon)$ iterations. Numerical experiments illustrating the practical behavior of the method are presented.
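To make the abstract's description concrete, below is a minimal Python sketch of one multiobjective proximal gradient iteration. It is not the paper's exact algorithm: the search direction is computed from the standard scalarized subproblem of the multiobjective proximal gradient literature (a max over the linearized smooth parts plus the nonsmooth terms and a proximal regularizer), the subproblem is solved with a generic derivative-free routine for illustration only, and the backtracking step mimics the abstract's idea of applying the line search only to the differentiable parts. All function names, parameters, and the toy problem are hypothetical.

```python
# Illustrative sketch (assumed formulation, not the authors' exact method).
# Each objective is F_i = f_i + g_i with f_i smooth and g_i convex.
# Direction subproblem (standard in multiobjective proximal gradient methods):
#   min_z  max_i [ <grad f_i(x), z - x> + g_i(z) - g_i(x) ] + ||z - x||^2 / (2*alpha)
import numpy as np
from scipy.optimize import minimize

def prox_grad_step(x, fs, grads, gs, alpha=1.0, beta=0.5, sigma=1e-4):
    """One iteration: solve the proximal subproblem once, then backtrack."""
    grads_x = [grad(x) for grad in grads]
    gs_x = [gi(x) for gi in gs]

    def subproblem(z):
        # Max over linearized-plus-nonsmooth models, plus the proximal term.
        models = [gx @ (z - x) + gi(z) - gix
                  for gx, gi, gix in zip(grads_x, gs, gs_x)]
        return max(models) + (z - x) @ (z - x) / (2.0 * alpha)

    # Derivative-free solve of the (nonsmooth) subproblem; illustration only.
    z = minimize(subproblem, x, method="Nelder-Mead").x
    d = z - x

    # Backtracking line search applied only to the smooth parts f_i:
    # shrink t until every f_i satisfies an Armijo-type decrease along d.
    t = 1.0
    while not all(f(x + t * d) <= f(x) + sigma * t * (gx @ d)
                  for f, gx in zip(fs, grads_x)):
        t *= beta
        if t < 1e-12:  # safeguard against stalling near a critical point
            break
    return x + t * d

# Tiny bi-objective toy problem: two quadratics plus an l1 term each.
fs = [lambda x: 0.5 * np.sum((x - 1.0) ** 2),
      lambda x: 0.5 * np.sum((x + 1.0) ** 2)]
grads = [lambda x: x - 1.0, lambda x: x + 1.0]
gs = [lambda x: 0.1 * np.sum(np.abs(x))] * 2

x = np.array([3.0, -2.0])
for _ in range(50):
    x = prox_grad_step(x, fs, grads, gs)
print("approximate weakly Pareto point:", x)
```

In this sketch, only the smooth parts enter the backtracking test, echoing the abstract's point that the line search is restricted to the differentiable component, while the single subproblem solve per iteration supplies the direction.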

Related articles:
arXiv:1802.01062 [math.OC] (Published 2018-02-04)
How to Characterize the Worst-Case Performance of Algorithms for Nonconvex Optimization
arXiv:1102.1347 [math.OC] (Published 2011-02-07)
Universal derivative-free optimization method with quadratic convergence
arXiv:1501.06711 [math.OC] (Published 2015-01-27)
Decomposable Norm Minimization with Proximal-Gradient Homotopy Algorithm