arXiv:2012.09417 [math.OC]

A Note on Optimization Formulations of Markov Decision Processes

Lexing Ying, Yuhua Zhu

Published 2020-12-17, Version 1

This note summarizes the optimization formulations used in the study of Markov decision processes. We consider both discounted and undiscounted processes, under both the standard and the entropy-regularized settings. For each setting, we first summarize the primal, dual, and primal-dual problems of the linear programming formulation. We then detail the connections between these problems and other formulations for Markov decision processes, such as the Bellman equation and the policy gradient method.
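
As an illustration of the kind of formulation the note surveys, the standard primal linear program for a discounted MDP can be written as follows. This is a minimal textbook-style sketch under assumed notation (finite state space S, action space A, reward r(s,a), transition kernel P(s'|s,a), discount factor gamma in (0,1), and a strictly positive weight vector mu), not a reproduction of the note's exact statement:

% Primal LP for a discounted MDP (textbook form; notation assumed above).
% Its unique minimizer is the optimal value function V^*.
\begin{align*}
\min_{V \in \mathbb{R}^{|S|}} \quad & \sum_{s \in S} \mu(s)\, V(s) \\
\text{s.t.} \quad & V(s) \ \ge\ r(s,a) + \gamma \sum_{s' \in S} P(s' \mid s,a)\, V(s'), \qquad \forall\, s \in S,\ a \in A.
\end{align*}

The dual of this program optimizes over discounted state-action occupancy measures; that dual variable is what connects the linear-programming view to the primal-dual and policy-gradient formulations mentioned in the abstract.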

Related articles:
arXiv:1202.6259 [math.OC] (Published 2012-02-28)
A distance for probability spaces, and long-term values in Markov Decision Processes and Repeated Games
arXiv:2201.07908 [math.OC] (Published 2022-01-19)
Markov decision processes with observation costs
arXiv:1310.7906 [math.OC] (Published 2013-10-29, updated 2015-08-04)
Convergence Analysis of the Approximate Newton Method for Markov Decision Processes