arXiv Analytics

arXiv:1212.2170 [math.PR]

Stochastic Perron's method for Hamilton-Jacobi-Bellman equations

Erhan Bayraktar, Mihai Sirbu

Published 2012-12-10, updated 2013-09-24 (Version 4)

We show that the value function of a stochastic control problem is the unique solution of the associated Hamilton-Jacobi-Bellman (HJB) equation, completely avoiding the proof of the so-called dynamic programming principle (DPP). Using Stochastic Perron's method, we construct a super-solution lying below the value function and a sub-solution dominating it. A comparison argument easily closes the proof. The program has the precise meaning of verification for viscosity solutions, obtaining the DPP as a conclusion. It also follows immediately that the weak and strong formulations of the stochastic control problem have the same value. Using this method we also capture the possible face-lifting phenomenon in a straightforward manner.
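In outline, the argument described in the abstract can be summarized by a chain of inequalities; the notation $v^-$, $v^+$, $\mathcal{V}^\pm$ and $V$ below is introduced only as an illustrative sketch, not quoted from the paper. If $\mathcal{V}^-$ and $\mathcal{V}^+$ denote suitable families of stochastic sub- and super-solutions of the control problem and $V$ is the value function, one sets
\[
v^- := \sup_{w \in \mathcal{V}^-} w \;\le\; V \;\le\; \inf_{w \in \mathcal{V}^+} w =: v^+ .
\]
Stochastic Perron's method shows that $v^-$ is a viscosity super-solution and $v^+$ a viscosity sub-solution of the HJB equation; a comparison principle then gives $v^+ \le v^-$, so $v^- = V = v^+$, and $V$ is the unique viscosity solution, with the DPP obtained as a by-product.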

Comments: Final version. To appear in the SIAM Journal on Control and Optimization.
Keywords: Perron's method, viscosity solutions, non-smooth verification, comparison principle
Categories: math.PR, cs.SY, math.AP, math.OC
Subjects: 49L20, 49L25, 60G46, 60H30, 35Q93, 35D40
Related articles:
arXiv:1812.04564 [math.PR] (Published 2018-12-11)
Global $C^1$ Regularity of the Value Function in Optimal Stopping Problems
arXiv:2002.01084 [math.PR] (Published 2020-02-04)
On the analyticity of the value function in optimal investment
arXiv:1205.0925 [math.PR] (Published 2012-05-04)
Controlled stochastic networks in heavy traffic: Convergence of value functions