arXiv:1202.6259 [math.OC]

A distance for probability spaces, and long-term values in Markov Decision Processes and Repeated Games

Jérôme Renault, Xavier Venel

Published 2012-02-28 (Version 1)

Given a finite set $K$, we denote by $X=\Delta(K)$ the set of probabilities on $K$ and by $Z=\Delta_f(X)$ the set of Borel probabilities on $X$ with finite support. Studying a Markov Decision Process with partial information on $K$ naturally leads to a Markov Decision Process with full information on $X$. We introduce a new metric $d_*$ on $Z$ such that the transitions become 1-Lipschitz from $(X, \|\cdot\|_1)$ to $(Z,d_*)$. In the first part of the article, we define the metric $d_*$ and prove several of its properties. In particular, $d_*$ satisfies a Kantorovich-Rubinstein-type duality formula and can be characterized via disintegrations. In the second part, we characterize the limit values in several classes of "compact non-expansive" Markov Decision Processes. In particular, we use the metric $d_*$ to characterize the limit value in Partially Observable MDPs with finitely many states and in Repeated Games with an informed controller and finite sets of states and actions. Moreover, in each case we prove the existence of a generalized notion of uniform value, in which we consider not only the Ces\`aro mean when the number of stages is large enough, but any evaluation function $\theta \in \Delta(\mathbb{N}^*)$ whose impatience $I(\theta)=\sum_{t\geq 1} |\theta_{t+1}-\theta_t|$ is small enough.
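As a quick illustration of the impatience condition, consider the Ces\`aro evaluation of horizon $n$: $\theta_t = 1/n$ for $t \leq n$ and $\theta_t = 0$ afterwards. The only nonzero jump is $|\theta_{n+1}-\theta_n| = 1/n$, so $I(\theta) = 1/n$, which vanishes as $n$ grows. The following is a minimal Python sketch of this computation (the helper name `impatience` is ours, purely illustrative, and not from the paper):

    def impatience(theta):
        """I(theta) = sum_{t>=1} |theta_{t+1} - theta_t|, for an evaluation
        with finite support given as a list (theta_1, ..., theta_n)."""
        # Append a trailing 0 so the jump after the last positive weight counts.
        theta = list(theta) + [0.0]
        return sum(abs(theta[t + 1] - theta[t]) for t in range(len(theta) - 1))

    # Cesaro evaluation of horizon n: theta_t = 1/n for t = 1, ..., n.
    n = 100
    cesaro = [1.0 / n] * n
    print(impatience(cesaro))  # 0.01, i.e. 1/n: small when n is large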

Related articles:
arXiv:1808.04478 [math.OC] (Published 2018-08-13)
Risk Sensitive Multiple Goal Stochastic Optimization, with application to Risk Sensitive Partially Observed Markov Decision Processes
arXiv:1911.05578 [math.OC] (Published 2019-11-13)
Reachability and safety objectives in Markov decision processes on long but finite horizons
arXiv:2201.07908 [math.OC] (Published 2022-01-19)
Markov decision processes with observation costs