arXiv Analytics

arXiv:2409.00655 [math.OC]

The landscape of deterministic and stochastic optimal control problems: One-shot Optimization versus Dynamic Programming

Jihun Kim, Yuhao Ding, Yingjie Bi, Javad Lavaei

Published 2024-09-01 (Version 1)

Optimal control problems can be solved via a one-shot (single) optimization or via a sequence of optimizations using dynamic programming (DP). However, computing their global optima is often NP-hard, so, at best, only locally optimal solutions can be obtained. In this work, we consider the discrete-time finite-horizon optimal control problem in both the deterministic and stochastic cases and study the optimization landscapes associated with the two approaches: one-shot optimization and DP. In the deterministic case, we prove that each local minimizer of the one-shot optimization corresponds to some control input induced by a locally minimum control policy of DP, and vice versa. With a parameterized policy approach, however, we prove that both the deterministic and stochastic cases exhibit the desirable property that each local minimizer of DP corresponds to some local minimizer of the one-shot optimization, although the converse does not necessarily hold. Nonetheless, under different technical assumptions for the deterministic and stochastic cases, if there exists only a single locally minimum control policy, one-shot optimization and DP turn out to capture the same local solution. These results pave the way toward understanding the performance and stability of local search methods in optimal control.
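
For readers unfamiliar with the two formulations being compared, a minimal sketch of a deterministic discrete-time finite-horizon problem is given below in generic textbook notation; the symbols x_t, u_t, f_t, c_t, T, and V_t are illustrative and not taken from the paper.

One-shot optimization over the entire control sequence:

\[
\min_{u_0,\dots,u_{T-1}} \ \sum_{t=0}^{T-1} c_t(x_t, u_t) + c_T(x_T)
\quad \text{subject to} \quad x_{t+1} = f_t(x_t, u_t), \quad x_0 \ \text{given}.
\]

Dynamic programming via the backward Bellman recursion:

\[
V_T(x) = c_T(x), \qquad
V_t(x) = \min_{u} \big[\, c_t(x, u) + V_{t+1}\big(f_t(x, u)\big) \,\big], \qquad t = T-1, \dots, 0,
\]

where the policy at stage t maps x to a minimizing u. Under this reading, the paper's landscape results relate local minimizers of the first, generally nonconvex, program to control inputs induced by locally minimum policies of the second.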
