arXiv:1706.05341 [math.OC]
Taylor Expansions of the Value Function Associated with a Bilinear Optimal Control Problem
Tobias Breiten, Karl Kunisch, Laurent Pfeiffer
Published 2017-06-16 (Version 1)
A general bilinear optimal control problem subject to an infinite-dimensional state equation is considered. Polynomial approximations of the associated value function are derived around the steady state by repeated formal differentiation of the Hamilton-Jacobi-Bellman equation. The terms of these approximations are multilinear forms, which can be obtained as solutions to generalized Lyapunov equations with recursively defined right-hand sides. They form the basis for defining a suboptimal feedback law, whose approximation properties are investigated. An application to the optimal control of a Fokker-Planck equation is also provided.
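A minimal sketch of the kind of expansion described above, in illustrative notation that is assumed here rather than taken from the paper (the steady state $\bar y$, multilinear forms $\mathcal{T}_k$, and control weight $\alpha$ are placeholders):

```latex
% Taylor-type polynomial approximation of the value function V
% around the steady state \bar{y}, truncated at order N:
V(y) \;\approx\; \sum_{k=2}^{N} \frac{1}{k!}\,
  \mathcal{T}_k\bigl(\underbrace{y-\bar y,\,\dots,\,y-\bar y}_{k\ \text{arguments}}\bigr),
\qquad \mathcal{T}_k \ \text{a bounded $k$-linear form.}

% Each \mathcal{T}_k is characterized by a generalized Lyapunov equation
% whose right-hand side \mathcal{R}_k depends recursively on
% \mathcal{T}_2, \dots, \mathcal{T}_{k-1} (schematic form):
\mathcal{L}\,\mathcal{T}_k \;=\; \mathcal{R}_k\bigl(\mathcal{T}_2,\dots,\mathcal{T}_{k-1}\bigr).

% The truncated expansion induces a suboptimal feedback law of the
% generic form (for a control cost weight \alpha > 0 and bilinear
% control operator N):
u(y) \;=\; -\frac{1}{\alpha}\,\bigl\langle N y,\ \nabla V(y) \bigr\rangle .
```

The schematic is only meant to convey the structure: a quadratic leading term (the classical linear-quadratic case) plus higher-order corrections obtained recursively, each correction feeding into a polynomial feedback law.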