arXiv Analytics

arXiv:1911.05578 [math.OC]

Reachability and safety objectives in Markov decision processes on long but finite horizons

Galit Ashkenazi-Golan, János Flesch, Arkadi Predtetchinski, Eilon Solan

Published 2019-11-13 (Version 1)

We consider discrete-time Markov decision processes in which the decision maker is interested in long but finite horizons. First we consider the reachability objective: the decision maker's goal is to reach a specific target state with the highest possible probability. Formally, a strategy $\sigma$ overtakes another strategy $\sigma'$ if the probability of reaching the target state within horizon $t$ is larger under $\sigma$ than under $\sigma'$, for all sufficiently large $t\in\mathbb{N}$. We prove that, under a condition on the transition structure, there exists a pure stationary strategy that is not overtaken by any pure strategy, and that, under a genericity condition, there exists a pure stationary strategy that is not overtaken by any stationary strategy. A strategy that is not overtaken by any other strategy, called an overtaking optimal strategy, does not always exist; we provide sufficient conditions for its existence. Next we consider the safety objective: the decision maker's goal is to avoid a specific state with the highest possible probability. We argue that the results proven for the reachability objective extend to this model. Finally, we discuss extensions of our results to two-player zero-sum perfect information games.
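The overtaking relation compares reach probabilities at every sufficiently large horizon. Below is a minimal sketch of that comparison on a toy MDP; the states, action names ("safe", "risky"), and transition probabilities are hypothetical illustration data, not taken from the paper. The target state is made absorbing, so the probability mass on it after $t$ steps equals the probability of having reached it within horizon $t$.

```python
# Hedged sketch (assumed toy data): comparing two pure stationary strategies
# by the probability of reaching a target state within horizon t.
import numpy as np

# States: 0 (start), 1 (intermediate), 2 (target, absorbing).
# Each action induces a row-stochastic transition matrix over the 3 states.
P = {
    "safe":  np.array([[0.9, 0.0, 0.1],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]]),
    "risky": np.array([[0.5, 0.3, 0.2],
                       [0.0, 0.8, 0.2],
                       [0.0, 0.0, 1.0]]),
}

TARGET = 2

def reach_prob(strategy, horizon, start=0):
    """P(hit TARGET within `horizon` steps) under a pure stationary strategy.

    `strategy` maps each state to an action name. Since TARGET is absorbing,
    the mass on TARGET after `horizon` steps equals the probability of having
    reached it by then.
    """
    n = 3
    # Markov chain induced by the stationary strategy: row s is the
    # transition row of state s under the action chosen at s.
    M = np.array([P[strategy[s]][s] for s in range(n)])
    dist = np.zeros(n)
    dist[start] = 1.0
    for _ in range(horizon):
        dist = dist @ M
    return dist[TARGET]

sigma  = {0: "risky", 1: "risky", 2: "safe"}   # hypothetical strategy
sigma_ = {0: "safe",  1: "safe",  2: "safe"}   # hypothetical alternative

for t in (1, 5, 20, 100):
    print(t, reach_prob(sigma, t), reach_prob(sigma_, t))
# If reach_prob(sigma, t) > reach_prob(sigma_, t) for all sufficiently
# large t, then sigma overtakes sigma_ in the sense of the abstract.
```

In this toy instance both strategies eventually reach the target with probability one, so the comparison only becomes meaningful horizon by horizon, which is exactly the situation the overtaking criterion is designed for.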

Related articles:
arXiv:1901.07839 [math.OC] (Published 2019-01-23)
Reinforcement Learning of Markov Decision Processes with Peak Constraints
arXiv:1907.10243 [math.OC] (Published 2019-07-24)
An Overview for Markov Decision Processes in Queues and Networks
arXiv:1512.03873 [math.OC] (Published 2015-12-12)
Structural Results for Partially Observed Markov Decision Processes