arXiv:1903.11280 [math.OC]

Decomposition of non-convex optimization via bi-level distributed ALADIN

Alexander Engelmann, Yuning Jiang, Boris Houska, Timm Faulwasser

Published 2019-03-27 (Version 1)

Decentralized optimization algorithms are important in different contexts, such as distributed optimal power flow or distributed model predictive control, as they avoid central coordination and enable decomposition of large-scale problems. In the case of constrained non-convex optimization, only a few algorithms are currently available; often their performance is limited, or they lack convergence guarantees. This paper proposes a framework for decentralized non-convex optimization via bi-level distribution of the Augmented Lagrangian Alternating Direction Inexact Newton (ALADIN) algorithm. Bi-level distribution means that the outer ALADIN structure is combined with an inner distribution/decentralization level solving a condensed variant of ALADIN's convex coordination QP by decentralized algorithms. We prove sufficient conditions ensuring local convergence while allowing for inexact decentralized/distributed solutions of the coordination QP. Moreover, we show how a decentralized variant of conjugate gradient or decentralized ADMM schemes can be employed at the inner level. We draw upon case studies from power systems and robotics to illustrate the performance of the proposed framework.
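
To make the bi-level idea concrete, below is a minimal Python sketch (not the authors' code) of the structure the abstract describes: an outer ALADIN loop with decoupled local NLP solves and a coordination QP, where the QP is condensed to its dual linear system and solved by conjugate gradient whose matrix-vector products are sums of agent-local contributions. The toy problem, penalty parameters rho/mu, finite-difference derivatives, and full-step updates are illustrative assumptions; the paper's method additionally handles local inequality constraints and runs the CG/ADMM inner level in a decentralized fashion across agents.

```python
# Illustrative sketch of bi-level ALADIN on a tiny consensus problem
#   min_x  f_1(x_1) + f_2(x_2)   s.t.   A_1 x_1 + A_2 x_2 = b.
# Outer level: ALADIN local steps + coordination QP.
# Inner level: the QP's condensed dual system is solved by CG, where each
# matvec is a sum of agent-local terms (the decentralized ingredient).
import numpy as np
from scipy.optimize import minimize
from scipy.sparse.linalg import LinearOperator, cg

# Two agents, scalar non-convex local objectives, coupling x1 + x2 = 1 (made up).
fs = [lambda x: (x[0]**2 - 1.0)**2 + 0.1 * x[0],
      lambda x: np.cos(x[0]) + 0.5 * x[0]**2]
A = [np.array([[1.0]]), np.array([[1.0]])]
b = np.array([1.0])

rho, mu = 10.0, 1e3                 # augmentation / slack penalty (assumed)
x = [np.zeros(1), np.zeros(1)]      # local primal iterates
lam = np.zeros(1)                   # consensus multiplier

def derivs(f, y, eps=1e-4):
    """Finite-difference gradient and (convexified) Hessian of a scalar f."""
    g = (f(y + eps) - f(y - eps)) / (2 * eps)
    H = (f(y + eps) - 2 * f(y) + f(y - eps)) / eps**2
    return np.array([g]), np.array([[max(H, 1e-3)]])

for k in range(30):
    # --- outer level, step 1: decoupled local augmented-Lagrangian NLPs ---
    ys, gs, Hs = [], [], []
    for i in range(2):
        obj = lambda y, i=i: (fs[i](y) + lam @ (A[i] @ y)
                              + 0.5 * rho * np.sum((y - x[i])**2))
        y = minimize(obj, x[i]).x
        g, H = derivs(fs[i], y)
        ys.append(y); gs.append(g); Hs.append(H)

    # --- inner level: condensed coordination QP solved by CG --------------
    # Dual system (S + I/mu) lam_qp = r with S = sum_i A_i H_i^{-1} A_i^T;
    # every term of the matvec uses only agent-local data.
    def matvec(v):
        return sum(A[i] @ np.linalg.solve(Hs[i], A[i].T @ v)
                   for i in range(2)) + v / mu
    r = (sum(A[i] @ (ys[i] - np.linalg.solve(Hs[i], gs[i])) for i in range(2))
         - b + lam / mu)
    S = LinearOperator((1, 1), matvec=matvec)
    lam_qp, _ = cg(S, r)

    # --- outer level, step 2: recover primal step, full-step update -------
    for i in range(2):
        dy = -np.linalg.solve(Hs[i], gs[i] + A[i].T @ lam_qp)
        x[i] = ys[i] + dy
    lam = lam_qp

print("x =", [float(xi[0]) for xi in x],
      " coupling residual =", float(sum(A[i] @ x[i] for i in range(2))[0] - b[0]))
```

In this sketch, replacing the centralized `cg` call with a decentralized CG or ADMM routine that exchanges only the per-agent matvec contributions would recover the bi-level structure the paper proposes; the convergence conditions in the paper then account for the resulting inexactness of the coordination step.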

Related articles:
arXiv:2001.08356 [math.OC] (Published 2020-01-23)
Replica Exchange for Non-Convex Optimization
arXiv:1901.10682 [math.OC] (Published 2019-01-30)
On the Convergence of (Stochastic) Gradient Descent with Extrapolation for Non-Convex Optimization
arXiv:1712.01033 [math.OC] (Published 2017-12-04)
NEON+: Accelerated Gradient Methods for Extracting Negative Curvature for Non-Convex Optimization