arXiv Analytics


arXiv:1810.04059 [math.OC]

Dynamic Optimization with Convergence Guarantees

Martin P. Neuenhofen, Eric C. Kerrigan

Published 2018-10-09 (Version 1)

We present a novel direct transcription method to solve optimization problems subject to nonlinear differential and inequality constraints. In order to provide numerical convergence guarantees, it is sufficient for the functions that define the problem to satisfy boundedness and Lipschitz conditions. Our assumptions are the most general to date; we do not require uniqueness, differentiability, or constraint qualifications to hold, and we avoid the use of Lagrange multipliers. Our approach differs fundamentally from state-of-the-art methods based on collocation. We follow a least-squares approach to finding approximate solutions to the differential equations. The objective is augmented with the integral of a quadratic penalty on the differential equation residual and a logarithmic barrier for the inequality constraints, as well as a quadratic penalty on the point constraint residual. The resulting unconstrained infinite-dimensional optimization problem is discretized using finite elements, while integrals are replaced by quadrature approximations if they cannot be evaluated analytically. Order-of-convergence results are derived, even if components of solutions are discontinuous.
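As a rough illustration of the augmented objective described in the abstract (a sketch in our own placeholder notation, not the paper's exact formulation), the unconstrained infinite-dimensional problem might look as follows, where x is the state, u the control, f the dynamics, c the inequality constraints, b the point constraints, rho the penalty weight, and tau the barrier parameter:

\[
\min_{x(\cdot),\,u(\cdot)} \; J(x,u)
  + \frac{\rho}{2}\int_{t_0}^{t_f} \bigl\| \dot{x}(t) - f\bigl(x(t),u(t),t\bigr) \bigr\|_2^2 \, \mathrm{d}t
  + \frac{\rho}{2}\,\bigl\| b\bigl(x(t_0),x(t_f)\bigr) \bigr\|_2^2
  - \tau \int_{t_0}^{t_f} \sum_{i} \log\bigl(-c_i\bigl(x(t),u(t),t\bigr)\bigr) \, \mathrm{d}t
\]

The first integral is the quadratic penalty on the differential equation residual, the middle term the quadratic penalty on the point constraint residual, and the last integral the logarithmic barrier for the inequality constraints; this functional would then be discretized with finite elements and quadrature as the abstract indicates.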

Related articles:
arXiv:1111.3271 [math.OC] (Published 2011-11-14)
On Bellman's principle with inequality constraints
arXiv:2408.03034 [math.OC] (Published 2024-08-06)
A Course in Dynamic Optimization
arXiv:2312.07465 [math.OC] (Published 2023-12-12)
Subgradient methods with variants of Polyak stepsize for quasi-convex optimization with inequality constraints for analogues of sharp minima