arXiv:1812.04300 [math.PR]

Deep neural networks algorithms for stochastic control problems on finite horizon, part I: convergence analysis

Côme Huré, Huyên Pham, Achref Bachouch, Nicolas Langrené

Published 2018-12-11 (version 1)

This paper develops algorithms for high-dimensional stochastic control problems based on deep learning and dynamic programming (DP). Unlike the classical approximate DP approach, we first approximate the optimal policy by means of neural networks, in the spirit of deep reinforcement learning, and then the value function by Monte Carlo regression. This is achieved in the DP recursion by performance or hybrid iteration, combined with regress-now or regress-later/quantization methods from numerical probability. We provide a theoretical justification of these algorithms: consistency and rates of convergence for the control and value function estimates are analyzed and expressed in terms of the universal approximation error of the neural networks. Numerical results on various applications are presented in a companion paper [2] and illustrate the performance of our algorithms.
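To make the backward recursion concrete, here is a minimal illustrative sketch, not the paper's actual algorithm: a toy one-dimensional linear-quadratic control problem in which the neural-network policy of each time step is replaced by a linear feedback `a = theta * x` fitted by grid search (standing in for stochastic gradient descent), followed by a "regress-now"-style least-squares estimate of the time-0 value function. All names and the toy dynamics are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, sigma = 3, 2000, 0.5   # horizon, Monte Carlo samples, noise level

def step(x, a, z):
    # toy controlled dynamics: X_{n+1} = X_n + a_n + sigma * Z_{n+1}
    return x + a + sigma * z

def running_cost(a):
    return a ** 2            # quadratic control-effort penalty

def terminal_cost(x):
    return x ** 2

# Policy class: linear feedback a = theta_n * x, a stand-in for the
# neural-network policies used in the paper.
thetas = np.zeros(N)

# Backward "performance iteration": at each time n, pick theta_n that
# minimizes the simulated cost-to-go under the already-fitted future policies.
for n in reversed(range(N)):
    x0 = rng.standard_normal(M)          # sampled states at time n
    z = rng.standard_normal((N, M))      # common noise across all candidates
    best, best_cost = 0.0, np.inf
    for th in np.linspace(-1.0, 0.0, 41):  # grid search replaces SGD here
        x, cost = x0.copy(), np.zeros(M)
        a = th * x
        cost += running_cost(a)
        x = step(x, a, z[n])
        for m in range(n + 1, N):        # roll out fitted future policies
            a = thetas[m] * x
            cost += running_cost(a)
            x = step(x, a, z[m])
        cost += terminal_cost(x)
        if cost.mean() < best_cost:
            best, best_cost = th, cost.mean()
    thetas[n] = best

# "Regress-now"-style value estimate at time 0: least squares of the
# realized cost on a polynomial basis of the initial state.
x0 = rng.standard_normal(M)
zv = rng.standard_normal((N, M))
x, cost = x0.copy(), np.zeros(M)
for m in range(N):
    a = thetas[m] * x
    cost += running_cost(a)
    x = step(x, a, zv[m])
cost += terminal_cost(x)
basis = np.vander(x0, 3)                  # columns [x0^2, x0, 1]
coef, *_ = np.linalg.lstsq(basis, cost, rcond=None)
print("fitted feedback gains:", thetas)
print("value regression coefficients:", coef)
```

For this toy problem the exact feedback gains follow from a Riccati recursion (-1/4, -1/3, -1/2 for the three steps), which the grid search should approximately recover; the point of the sketch is only the structure of the backward loop, where each policy is fitted against the already-fitted future policies before the value function is regressed.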

Related articles:
arXiv:1609.01655 [math.PR] (Published 2016-09-06)
The dividend problem with a finite horizon
arXiv:2002.01084 [math.PR] (Published 2020-02-04)
On the analyticity of the value function in optimal investment
arXiv:1812.04564 [math.PR] (Published 2018-12-11)
Global $C^1$ Regularity of the Value Function in Optimal Stopping Problems