arXiv:1912.10697 [math.OC]
Hamilton-Jacobi-Bellman Equations for Q-Learning in Continuous Time
Published 2019-12-23Version 1
In this paper, we introduce Hamilton-Jacobi-Bellman (HJB) equations for Q-functions in continuous-time optimal control problems with Lipschitz continuous controls. The standard Q-function used in reinforcement learning is shown to be the unique viscosity solution of the HJB equation. A necessary and sufficient condition for optimality is provided using the viscosity solution framework. Using the HJB equation, we develop a Q-learning method for continuous-time dynamical systems. A DQN-like algorithm is also proposed for high-dimensional state and control spaces. The performance of the proposed Q-learning algorithm is demonstrated on 1-, 10- and 20-dimensional dynamical systems.
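The paper's own algorithm is not reproduced here, but the general idea of Q-learning on a time-discretized continuous-time system can be sketched roughly. In the toy example below, everything is hypothetical: 1-D linear dynamics dx/dt = a·x + b·u, a quadratic running cost, discount rate rho, an Euler step of size dt, and a tabular Q-function standing in for the function approximators a DQN-like method would use in high dimensions.

```python
import numpy as np

# Hypothetical 1-D continuous-time control problem (not from the paper):
# dynamics dx/dt = a*x + b*u, running cost x^2 + u^2, discount rate rho.
a, b = -1.0, 1.0
rho, dt = 1.0, 0.05
rng = np.random.default_rng(0)

# Tabular Q over discretized state/control grids.
xs = np.linspace(-2.0, 2.0, 41)
us = np.linspace(-1.0, 1.0, 21)
Q = np.zeros((xs.size, us.size))

def idx(grid, v):
    """Index of the grid cell closest to v (clipped to the grid)."""
    return int(np.clip(np.searchsorted(grid, v), 0, grid.size - 1))

alpha = 0.1  # learning rate
for episode in range(200):
    x = rng.uniform(-2.0, 2.0)
    for _ in range(100):
        i = idx(xs, x)
        # Epsilon-greedy control (minimization, since Q accumulates cost).
        j = int(rng.integers(us.size)) if rng.random() < 0.1 else int(np.argmin(Q[i]))
        u = us[j]
        cost = (x**2 + u**2) * dt           # running cost over one step
        x_next = x + (a * x + b * u) * dt   # explicit Euler step
        i_next = idx(xs, x_next)
        # Discounted TD target; exp(-rho*dt) ~ 1 - rho*dt for small dt.
        target = cost + np.exp(-rho * dt) * Q[i_next].min()
        Q[i, j] += alpha * (target - Q[i, j])
        x = x_next
```

This is only a discrete-time approximation; the point of the paper is to characterize the Q-function directly at the continuous-time level via an HJB equation, rather than relying on such a discretization.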
Related articles:
arXiv:1505.06567 [math.OC] (Published 2015-05-25)
On nonuniqueness of solutions of Hamilton-Jacobi-Bellman equations
arXiv:2009.13097 [math.OC] (Published 2020-09-28)
Hamilton-Jacobi-Bellman Equations for Maximum Entropy Optimal Control
arXiv:0907.1603 [math.OC] (Published 2009-07-09)
HJB Equations for the Optimal Control of Differential Equations with Delays and State Constraints, II: Optimal Feedbacks and Approximations