arXiv:2210.09665 [math.OC]

On convergence of a $q$-random coordinate constrained algorithm for non-convex problems

Alireza Ghaffari-Hadigheh, Lennart Sinjorgo, Renata Sotirov

Published 2022-10-18, Version 1

We propose a random coordinate descent algorithm for minimizing a non-convex objective function subject to one linear constraint and simple bounds on the variables. Although it is common to update only two random coordinates simultaneously in each iteration of a coordinate descent algorithm, our algorithm allows updating an arbitrary number of coordinates. We provide a proof of convergence of the algorithm. The convergence rate of the algorithm improves when we update more coordinates per iteration. Numerical experiments on large-scale instances of different optimization problems show the benefit of updating many coordinates simultaneously.
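The abstract's setting — minimizing f(x) subject to one linear constraint and box bounds, updating q randomly chosen coordinates per iteration — can be sketched as follows. This is an illustrative sketch only, not the authors' method: it assumes an all-ones constraint (sum of the variables is fixed) and takes a projected partial-gradient step; the function and parameter names are hypothetical.

```python
import numpy as np

def q_rcd(grad_f, x0, q=2, lb=0.0, ub=1.0, alpha=0.1, iters=2000, seed=0):
    """Illustrative sketch of a q-random coordinate descent step for
        min f(x)  s.t.  sum(x) = const,  lb <= x <= ub.
    Assumption: constraint coefficients are all ones (not necessarily
    the setting of the paper)."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    n = x.size
    for _ in range(iters):
        # Draw q distinct coordinates to update this iteration.
        idx = rng.choice(n, size=q, replace=False)
        g = grad_f(x)[idx]
        # Project the partial gradient onto {d : sum(d) = 0}, so that
        # moving along -d keeps sum(x) unchanged.
        d = g - g.mean()
        if np.allclose(d, 0.0):
            continue
        # Largest step that keeps the chosen coordinates within bounds.
        t = alpha
        for j, dj in zip(idx, d):
            if dj > 0:
                t = min(t, (x[j] - lb) / dj)
            elif dj < 0:
                t = min(t, (ub - x[j]) / (-dj))
        x[idx] -= t * d
    return x
```

With q = 2 this reduces to the classical pairwise update (one coordinate increases, one decreases by the same amount); larger q spreads the step over more coordinates per iteration, which is the regime the paper's improved convergence rate concerns.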
