arXiv Analytics

arXiv:1512.07866 [math.PR]

Bellman equation and viscosity solutions for mean-field stochastic control problem

Huyên Pham, Xiaoli Wei

Published 2015-12-24 (Version 1)

We consider the stochastic optimal control problem for McKean-Vlasov stochastic differential equations. By restricting to feedback controls, we reformulate the problem as a deterministic control problem with the marginal distribution as the sole controlled state variable, and prove that the dynamic programming principle holds in its general form. Then, relying on the notion of differentiability with respect to probability measures recently introduced by P.L. Lions in [30], and on a special Itô formula for flows of probability measures, we derive the (dynamic programming) Bellman equation for the mean-field stochastic control problem. This Bellman equation, set in the Wasserstein space of probability measures, reduces to the classical finite-dimensional partial differential equation in the case of no mean-field interaction. We prove a verification theorem in our McKean-Vlasov framework, and give explicit solutions to the Bellman equation for the linear-quadratic mean-field control problem, with applications to mean-variance portfolio selection and a systemic risk model. Finally, we introduce a notion of lifted viscosity solutions for the Bellman equation, and show the viscosity property and uniqueness of the value function of the McKean-Vlasov control problem.
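
To fix ideas, here is a sketch of the setting; the notation (b, sigma, f, g, alpha, and the Lions derivative) is illustrative and not taken verbatim from the paper. The controlled state follows McKean-Vlasov dynamics, the value function is defined on the Wasserstein space, and the Bellman equation involves the Lions derivative \partial_\mu v, with the infimum taken over feedback controls a(.):

% Controlled McKean-Vlasov dynamics (illustrative notation):
\[
dX_t \;=\; b\big(X_t,\mathbb{P}_{X_t},\alpha_t\big)\,dt
      \;+\; \sigma\big(X_t,\mathbb{P}_{X_t},\alpha_t\big)\,dW_t .
\]
% Value function on the Wasserstein space:
\[
v(t,\mu) \;=\; \inf_{\alpha}\,
\mathbb{E}\Big[\int_t^T f\big(X_s,\mathbb{P}_{X_s},\alpha_s\big)\,ds
      \;+\; g\big(X_T,\mathbb{P}_{X_T}\big)\Big],
\qquad \mathbb{P}_{X_t}=\mu .
\]
% Schematic Bellman equation via the Lions derivative and the Itô
% formula for flows of probability measures (no common noise):
\[
\partial_t v(t,\mu)
 \;+\; \inf_{a(\cdot)} \int_{\mathbb{R}^d}\!\Big[
    f\big(x,\mu,a(x)\big)
  \;+\; \partial_\mu v(t,\mu)(x)\cdot b\big(x,\mu,a(x)\big)
  \;+\; \tfrac12\,\mathrm{tr}\big(
      \sigma\sigma^{\top}\big(x,\mu,a(x)\big)\,
      \partial_x \partial_\mu v(t,\mu)(x)\big)
 \Big]\,\mu(dx) \;=\; 0 .
\]

With no mean-field interaction (b, sigma, f, g independent of the measure argument), v(t,mu) is the integral against mu of a function v(t,x), and the equation above collapses to the classical finite-dimensional HJB equation.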
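
In the linear-quadratic case, the explicit solutions follow from a quadratic ansatz in the first two moments of the measure; the following one-dimensional form is an illustrative guess consistent with LQ mean-field control, not a quote from the paper:

% Illustrative quadratic ansatz for the LQ mean-field value function:
\[
v(t,\mu) \;=\; K(t)\int_{\mathbb{R}} x^{2}\,\mu(dx)
 \;+\; \Lambda(t)\Big(\int_{\mathbb{R}} x\,\mu(dx)\Big)^{\!2}
 \;+\; R(t),
\]

where K, Lambda, and R solve Riccati-type ODEs obtained by substituting the ansatz into the Bellman equation. Mean-variance portfolio selection and the systemic risk model fit this template, since their costs are quadratic in the state and in its mean.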
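
The "lifted" viscosity notion rests on Lions' identification of functions of probability measures with functions of random variables; schematically, and in our own notation for the standard construction:

% Lifting of v to the Hilbert space of square-integrable random variables:
\[
V(t,\xi) \;:=\; v\big(t,\mathcal{L}(\xi)\big),
\qquad \xi \in L^{2}\big(\Omega,\mathcal{F},\mathbb{P};\mathbb{R}^{d}\big),
\]

where \mathcal{L}(\xi) denotes the law of \xi. The Bellman equation on the Wasserstein space then lifts to a PDE on the Hilbert space L^2, where viscosity solutions can be defined by testing V against smooth functions and comparison arguments from Hilbert-space viscosity theory become available.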

Related articles:
arXiv:2210.02417 [math.PR] (Published 2022-10-05)
Probabilistic Representation of Viscosity Solutions to Quasi-Variational Inequalities with Non-Local Drivers
arXiv:1808.02332 [math.PR] (Published 2018-08-07)
Viscosity solutions to Hamilton-Jacobi-Bellman equations associated with sublinear Lévy(-type) processes
arXiv:1702.05921 [math.PR] (Published 2017-02-20)
A Mean-field Stochastic Control Problem with Partial Observations