arXiv Analytics

arXiv:2001.03707 [math.OC]

Superconvergence of Online Optimization for Model Predictive Control

Sen Na, Mihai Anitescu

Published 2020-01-11 (Version 1)

We develop a one-Newton-step-per-horizon, online, lag-$L$, model predictive control (MPC) algorithm for solving discrete-time, equality-constrained, nonlinear dynamic programs. Based on recent sensitivity analysis results for this class of problems, we prove that the approach exhibits a behavior we call superconvergence: the tracking error with respect to the full-horizon solution is not only stable under successive horizon shifts, but also decreases with increasing shift order to a minimum value that decays exponentially in the length of the receding horizon. The key analytical step is decomposing the one-step error recursion of our algorithm into an algorithmic error and a perturbation error. We show that the perturbation error decays exponentially in the lag between two consecutive receding horizons, while the algorithmic error, governed by Newton's method, contracts quadratically. Together, these estimates yield local exponential convergence in the receding-horizon length for suitable values of $L$. Numerical experiments validate our theoretical findings.
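The mechanism behind the error decomposition can be illustrated on a toy problem. The sketch below is an assumption-laden simplification, not the paper's lag-$L$ MPC algorithm: it tracks the minimizer of a slowly drifting, strongly convex scalar objective with a single Newton step per time step. The warm start plays the role of the shifted previous horizon solution, the drift plays the role of the horizon-shift perturbation, and the tracking error settles at a small plateau because the quadratic Newton contraction (algorithmic error) balances the per-step drift (perturbation error).

```python
import numpy as np

# Toy illustration (hypothetical example, not the paper's algorithm):
# track the minimizer of the drifting objective
#   f_t(x) = (x - c_t)^2 + 0.1 * cos(x),   c_t = drift * t,
# using ONE Newton step per time step, warm-started at the previous iterate.

def grad(x, c):
    return 2.0 * (x - c) - 0.1 * np.sin(x)

def hess(x, c):
    return 2.0 - 0.1 * np.cos(x)

def exact_minimizer(c, iters=50):
    # Fully converged Newton iteration, used only as a reference solution.
    x = c
    for _ in range(iters):
        x -= grad(x, c) / hess(x, c)
    return x

drift = 0.01                        # per-step perturbation of the problem data
x = 0.0                             # warm start
errors = []
for t in range(200):
    c = drift * t                   # the problem shifts slightly each step
    x -= grad(x, c) / hess(x, c)    # one Newton step per shift
    errors.append(abs(x - exact_minimizer(c)))

# The error recursion behaves like e_{t+1} <= C * (e_t + drift)^2:
# quadratic contraction of the algorithmic part, plus an O(drift) perturbation,
# so the tracking error plateaus far below the drift size.
print(max(errors[100:]))
```

In the paper's setting the scalar Newton step is replaced by one Newton step on the full equality-constrained horizon problem, and the perturbation term decays exponentially in the lag $L$ rather than being a fixed drift; the balance between the two error sources is the same.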

Related articles:
arXiv:2411.19056 [math.OC] (Published 2024-11-28)
Stochastic models for online optimization
arXiv:2103.12681 [math.OC] (Published 2021-03-23)
A Distributed Active Set Method for Model Predictive Control
arXiv:2007.07062 [math.OC] (Published 2020-07-14)
Hidden invexity in model predictive control