arXiv:2011.12233 [math.OC]

Linear Convergence of Distributed Mirror Descent with Integral Feedback for Strongly Convex Problems

Youbang Sun, Shahin Shahrampour

Published 2020-11-24 (Version 1)

Distributed optimization often requires finding the minimum of a global objective function written as a sum of local functions. A group of agents works collectively to minimize this global function. We study a continuous-time decentralized mirror descent algorithm that uses purely local gradient information to converge to the global optimum. The algorithm enforces consensus among agents using the idea of integral feedback. Recently, Sun and Shahrampour (2020) studied the asymptotic convergence of this algorithm for the case where the global function is strongly convex but the local functions are only convex. In this work, using tools from control theory, we prove that the algorithm indeed achieves (local) exponential convergence. We also provide a numerical experiment on a real data set to validate the convergence speed of our algorithm.
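
For concreteness, the setup described above can be written as follows; the notation here is illustrative and not necessarily the paper's exact formulation. Each of the $n$ agents holds a private local cost $f_i$, and the network collectively solves

$$\min_{x \in \mathcal{X}} \; f(x) \;=\; \sum_{i=1}^{n} f_i(x),$$

where only agent $i$ can evaluate $\nabla f_i$. A minimal sketch of continuous-time mirror descent coupled with a proportional-integral (PI) consensus term, written for intuition under the assumption of a standard PI structure (the gains $\alpha, \beta, \gamma$, the mirror map $\psi$, and the exact coupling are placeholders, not the paper's equations), is

$$\dot{z}_i = -\alpha \nabla f_i(x_i) \;-\; \beta \sum_{j} a_{ij}\,(x_i - x_j) \;-\; \gamma \sum_{j} a_{ij}\,(v_i - v_j), \qquad \dot{v}_i = \sum_{j} a_{ij}\,(x_i - x_j), \qquad x_i = \nabla \psi^{*}(z_i),$$

where $a_{ij}$ are the weights of the communication graph, $\psi^{*}$ is the convex conjugate of the mirror map, and $v_i$ is the integral-feedback state that accumulates disagreement so that the consensus error is driven to zero at equilibrium. The (local) exponential convergence established in the paper is the continuous-time analogue of the linear rate referred to in the title: all $x_i$ approach the common minimizer of $f$ at a geometric rate.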

Related articles:
arXiv:2009.06747 [math.OC] (Published 2020-09-14)
Distributed Mirror Descent with Integral Feedback: Asymptotic Convergence Analysis of Continuous-time Dynamics
arXiv:2306.09694 [math.OC] (Published 2023-06-16)
Linear convergence of Nesterov-1983 with the strong convexity
arXiv:2410.14592 [math.OC] (Published 2024-10-18)
Contractivity and linear convergence in bilinear saddle-point problems: An operator-theoretic approach