arXiv:1307.3365 [math.OC]
Markov games with frequent actions and incomplete information
Pierre Cardaliaguet, Catherine Rainer, Dinah Rosenberg, Nicolas Vieille
Published 2013-07-12 (Version 1)
We study a two-player, zero-sum, stochastic game with incomplete information on one side, in which the players are allowed to play more and more frequently. The informed player observes the realization of a Markov chain on which the payoffs depend, while the non-informed player observes only his opponent's actions. We show the existence of a limit value as the time span between two consecutive stages vanishes; this value is characterized both through an auxiliary optimization problem and as the solution of a Hamilton-Jacobi equation.
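For orientation, characterizations of this kind are typically posed on time crossed with the simplex of beliefs held by the non-informed player over the states of the Markov chain, with the limit value pinned down as a suitably constrained viscosity solution of a first-order Hamilton-Jacobi equation. The sketch below is illustrative only: the Hamiltonian H, the terminal condition, and the concavity requirement are placeholders standing in for the paper's precise statement, not a reproduction of it.

% Illustrative sketch only -- not the exact equation of arXiv:1307.3365.
% v(t,p): candidate limit value at time t when the non-informed player's
% belief over the chain's state space K is p; Delta(K) is the belief simplex;
% H is a placeholder Hamiltonian built from the stage payoffs and the
% transition rates of the Markov chain.
\[
  \begin{cases}
    \partial_t v(t,p) + H\bigl(p,\, D_p v(t,p)\bigr) = 0,
      & (t,p) \in (0,1) \times \Delta(K), \\[4pt]
    v(1,p) = 0, & p \in \Delta(K),
  \end{cases}
\]
% In addition, v(t, .) is required to be concave in p, reflecting that the
% informed player trades off exploiting his information against revealing it.

In this generic picture, the auxiliary optimization problem mentioned in the abstract is what identifies the Hamiltonian from the primitives of the game; the paper itself gives the precise form.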
Categories: math.OC