arXiv Analytics

arXiv:2003.13982 [math.PR]

Existence of optimal delay-dependent control for finite-horizon continuous-time Markov decision process

Zhong-Wei Liao, Jinghai Shao

Published 2020-03-31 (Version 1)

This paper studies the optimal control problem for continuous-time Markov decision processes with denumerable state space and compact action space. The admissible controls depend not only on the current state of the jumping process but also on its history. Using the compactification method, we show the existence of an optimal delay-dependent control under explicit conditions, and further establish the dynamic programming principle. Moreover, we show that the value function is the unique viscosity solution of a certain Hamilton-Jacobi-Bellman equation that does not depend on the delay-dependent control policies. Consequently, under our explicit conditions, allowing decisions to depend on the history of the jumping process has no impact on the value function.
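For orientation, a Hamilton-Jacobi-Bellman equation for a finite-horizon continuous-time Markov decision process typically takes the following generic form (a sketch only; the transition rates $q(j \mid i, a)$, running reward $r$, terminal reward $g$, horizon $T$, and action space $A$ are standard notation assumed here, not taken from the paper):

```latex
% Generic finite-horizon HJB equation for a CTMDP with denumerable states:
% v(t,i) is the value function at time t in state i.
\partial_t v(t,i)
  + \sup_{a \in A} \Big[ \sum_{j \neq i} q(j \mid i, a)\,\big(v(t,j) - v(t,i)\big)
  + r(i,a) \Big] = 0,
\qquad v(T,i) = g(i).
```

The abstract's observation is that an equation of this type involves only the current state, which is why its unique viscosity solution coincides with the value function even when history-dependent (delay-dependent) controls are admitted.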

Related articles: Most relevant | Search more
arXiv:1607.08046 [math.PR] (Published 2016-07-27)
On the link between infinite horizon control and quasi-stationary distributions
arXiv:2002.01084 [math.PR] (Published 2020-02-04)
On the analyticity of the value function in optimal investment
arXiv:2101.00546 [math.PR] (Published 2021-01-03)
Optimal stopping time on discounted semi-Markov processes