arXiv:1912.07883 [math.OC]

Mean-field Markov decision processes with common noise and open-loop controls

Médéric Motte, Huyên Pham

Published 2019-12-17 (Version 1)

We develop an exhaustive study of Markov decision processes (MDPs) under mean-field interaction both on states and actions, in the presence of common noise, and when optimization is performed over open-loop controls on an infinite horizon. Such a model, called CMKV-MDP for conditional McKean-Vlasov MDP, arises, and is obtained here rigorously with a rate of convergence, as the asymptotic problem of N cooperative agents controlled by a social planner/influencer who observes the environment noises but not necessarily the individual states of the agents. We highlight the crucial role of relaxed controls and of the randomization hypothesis for this class of models with respect to classical MDP theory. We prove the correspondence between the CMKV-MDP and a general lifted MDP on the space of probability measures, and establish the dynamic programming Bellman fixed point equation satisfied by the value function, as well as the existence of ε-optimal randomized feedback controls. The arguments of proof involve an original measurable optimal coupling for the Wasserstein distance. This provides a procedure for learning strategies in a large population of interacting collaborative agents. MSC Classification: 90C40, 49L20.
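As a rough illustration (a sketch, not the paper's precise statement), a Bellman fixed point equation for a lifted MDP on the space of probability measures typically takes a form such as

V(\mu) \;=\; \sup_{a}\Big\{ R(\mu,a) \;+\; \beta\,\mathbb{E}\big[\, V\big(\Phi(\mu,a,\varepsilon^0)\big) \,\big]\Big\},

where \mu denotes the conditional law of the representative agent's state given the common noise, a an admissible (relaxed) control, \beta \in (0,1) a discount factor, \varepsilon^0 the common noise, and R and \Phi are placeholder reward and measure-valued transition maps; the exact formulation in the paper may differ.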

Related articles:
arXiv:2204.01185 [math.OC] (Published 2022-04-03)
Wasserstein Hamiltonian flow with common noise on graph
arXiv:1708.06035 [math.OC] (Published 2017-08-20)
Quantile-based Mean-Field Games with Common Noise
arXiv:2207.12738 [math.OC] (Published 2022-07-26)
Quantitative propagation of chaos for mean field Markov decision process with common noise