arXiv:1709.01167 [math.OC]
Maximum principle for a deterministic optimal control problem under state constraints: A limit approach
Published 2017-09-01 (Version 1)
This study considers an optimal control problem driven by an ordinary differential equation under state constraints. In contrast to the classical maximum principle under state constraints, we take an operational point of view: we first introduce an optimal control problem whose state constraints are imposed only at discretely many points, and prove a maximum principle for it. We then show that this discretely constrained problem is a near-optimal control problem for the original one, and that its optimal solution converges to the optimal solution of the original problem. Finally, a linear-quadratic optimal control problem is used to illustrate the main results.
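The idea of replacing a continuous state constraint by a constraint enforced only at finitely many grid points can be illustrated numerically. The following is a minimal sketch, not the paper's construction: a linear-quadratic problem with dynamics dx/dt = u is transcribed by Euler discretization, and the state bound is imposed only at the grid points. All numerical values (horizon, grid size, bound) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative LQ problem: minimize \int_0^T (x^2 + u^2) dt subject to
# dx/dt = u, x(0) = x0, and the state bound x(t) >= x_min imposed only
# at the N+1 discretization points (the "discrete state constraints").
T, N = 1.0, 20
dt = T / N
x0, x_min = 1.0, 0.5  # assumed values, not taken from the paper

def simulate(u):
    """Forward Euler rollout of dx/dt = u from x(0) = x0."""
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * u[k]
    return x

def cost(u):
    """Riemann-sum approximation of the quadratic running cost."""
    x = simulate(u)
    return dt * (np.sum(x[:-1] ** 2) + np.sum(u ** 2))

# State constraint enforced only at grid points: x_k - x_min >= 0.
cons = {"type": "ineq", "fun": lambda u: simulate(u) - x_min}

res = minimize(cost, np.zeros(N), method="SLSQP", constraints=cons)
x_opt = simulate(res.x)
```

Refining the grid (increasing N) tightens the discrete constraint toward the continuous one, which is the limit studied in the paper; this sketch only shows one fixed discretization level.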
Comments: 25. arXiv admin note: text overlap with arXiv:1610.05843
Categories: math.OC