arXiv Analytics


arXiv:math/0506489 [math.OC]

Acceleration Operators in the Value Iteration Algorithms for Markov Decision Processes

Oleksandr Shlakhter, Chi-Guhn Lee, Dmitry Khmelev, Nasser Jaber

Published 2005-06-23, updated 2008-03-27 (Version 2)

We study a general approach to accelerating the convergence of the most widely used solution method for Markov decision processes (MDPs) with the total expected discounted reward criterion. Inspired by the monotone behavior of the contraction mappings on the feasible set of the linear programming problem equivalent to the MDP, we establish a class of operators that can be used in combination with a contraction mapping operator in the standard value iteration algorithm and its variants. We then propose two such operators, which can easily be implemented as part of the value iteration algorithm and its variants. Numerical studies show that the computational savings can be significant, especially when the discount factor approaches 1 and the transition probability matrix becomes dense, in which case the standard value iteration algorithm and its variants suffer from slow convergence.
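To make the setting concrete, below is a minimal sketch of the standard value iteration algorithm that such operators are meant to accelerate. The abstract does not specify the two proposed operators, so the accelerate argument here is purely a hypothetical hook showing where an extra operator would compose with the Bellman contraction; the function name, parameter names, and stopping tolerance are illustrative assumptions, not the authors' implementation.

import numpy as np

def value_iteration(P, r, gamma, tol=1e-8, accelerate=None, max_iter=100_000):
    """Standard value iteration for a finite discounted MDP.

    P : (A, S, S) array, P[a, s, s'] = Pr(s' | s, a)
    r : (A, S) array of expected one-step rewards
    gamma : discount factor in [0, 1)
    accelerate : optional map V -> V applied after each Bellman update
        (hypothetical placeholder for an acceleration operator)
    """
    n_states = P.shape[1]
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Bellman (contraction) operator:
        # (T V)(s) = max_a [ r(s, a) + gamma * sum_{s'} P(s' | s, a) V(s') ]
        Q = r + gamma * (P @ V)        # shape (A, S)
        V_new = Q.max(axis=0)
        if accelerate is not None:
            V_new = accelerate(V_new)  # extra operator composed with T
        # sup-norm stopping rule
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = (r + gamma * (P @ V)).argmax(axis=0)  # greedy policy w.r.t. V
    return V, policy

Convergence of the plain loop slows sharply as gamma approaches 1, since the contraction modulus of T is gamma; that is the regime in which the abstract reports the largest computational savings from the added operators.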

Related articles: Most relevant | Search more
arXiv:1808.04478 [math.OC] (Published 2018-08-13)
Risk Sensitive Multiple Goal Stochastic Optimization, with application to Risk Sensitive Partially Observed Markov Decision Processes
arXiv:1712.00970 [math.OC] (Published 2017-12-04)
Convex and Lipschitz function approximations for Markov decision processes
arXiv:1310.5770 [math.OC] (Published 2013-10-22, updated 2014-04-26)
Quantized Stationary Control Policies in Markov Decision Processes