arXiv:1706.05341 [math.OC]

Taylor Expansions of the Value Function Associated with a Bilinear Optimal Control Problem

Tobias Breiten, Karl Kunisch, Laurent Pfeiffer

Published 2017-06-16, Version 1

A general bilinear optimal control problem subject to an infinite-dimensional state equation is considered. Polynomial approximations of the associated value function are derived around the steady state by repeated formal differentiation of the Hamilton-Jacobi-Bellman equation. The terms of the approximations are described by multilinear forms, which can be obtained as solutions to generalized Lyapunov equations with recursively defined right-hand sides. They form the basis for defining a suboptimal feedback law. The approximation properties of this feedback law are investigated. An application to the optimal control of a Fokker-Planck equation is also provided.
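The expansion described in the abstract can be sketched schematically. The notation below (steady state $\bar{y}$, multilinear forms $\mathcal{T}_k$, control operator $B$, control cost weight $\alpha$, truncation order $N$) is illustrative and not taken verbatim from the paper; it only indicates the general shape of such Taylor-type approximations of a value function around a steady state:

```latex
% Polynomial approximation of the value function V around the steady
% state \bar{y}: each term is a k-multilinear form T_k, evaluated at
% k copies of the deviation y - \bar{y}.
\[
  V(y) \;\approx\; \sum_{k=2}^{N} \frac{1}{k!}\,
  \mathcal{T}_k\bigl(\,\underbrace{y-\bar{y},\,\dots,\,y-\bar{y}}_{k\ \text{times}}\,\bigr).
\]
% Schematically, T_2 solves an operator (Lyapunov-type) equation, and
% each higher-order T_k solves a generalized Lyapunov equation whose
% right-hand side is built recursively from T_2, ..., T_{k-1}.
% Truncating the expansion at order N then yields a suboptimal
% feedback law of roughly the following form (for a bilinear problem
% with control operator B and quadratic control cost weight alpha):
\[
  u_N(y) \;=\; -\frac{1}{\alpha}\,
  B(y)^{*}\,\nabla\!\left(\sum_{k=2}^{N} \frac{1}{k!}\,
  \mathcal{T}_k\bigl(y-\bar{y},\dots,y-\bar{y}\bigr)\right).
\]
```

This is only a schematic of the construction; the paper itself works in an infinite-dimensional setting and makes the recursion and the approximation properties of $u_N$ precise.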

Related articles:
arXiv:1211.3724 [math.OC] (Published 2012-11-15, updated 2013-05-23)
Variational properties of value functions
arXiv:1804.05011 [math.OC] (Published 2018-04-13)
On the Taylor Expansion of Value Functions
arXiv:1705.03257 [math.OC] (Published 2017-05-09)
Optimality conditions and local regularity of the value function for the optimal exit time problem