arXiv:1401.3198 [math.OC]

Online Markov decision processes with Kullback-Leibler control cost

Peng Guan, Maxim Raginsky, Rebecca Willett

Published 2014-01-14 (Version 1)

This paper considers an online (real-time) control problem in which an agent performs a discrete-time random walk over a finite state space. The agent's action at each time step is to specify the probability distribution for the next state given the current state. Following Todorov's setup, the state-action cost at each time step is the sum of a state cost and a control cost given by the Kullback-Leibler (KL) divergence between the agent's next-state distribution and the one determined by some fixed passive dynamics. The problem is online because the state cost functions are generated by a dynamic environment, and the agent learns the current state cost only after selecting an action. Under mild regularity conditions, an explicit construction of a computationally efficient strategy with small regret (i.e., small expected difference between its actual total cost and the smallest cost attainable using noncausal knowledge of the state costs) is presented, and the performance of the proposed strategy is demonstrated on a simulated target-tracking problem. A number of new results on Markov decision processes with KL control cost are also obtained.
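To fix ideas, the per-step cost in this formulation can be written, in notation chosen here for illustration (the abstract itself fixes no symbols), as

    c_t(x, u) = f_t(x) + D( u(.|x) || p(.|x) ),

where f_t is the state cost revealed by the environment at step t, u(.|x) is the agent's chosen next-state distribution, p(.|x) is the fixed passive dynamics, and D(.||.) denotes KL divergence. The regret over a horizon T is then, roughly,

    R_T = E[ sum_{t=1}^T c_t(X_t, U_t) ] - min_pi E[ sum_{t=1}^T c_t(X_t^pi, U_t^pi) ],

where the minimum runs over a comparator class of policies allowed noncausal knowledge of the state costs (the paper makes this class precise). A minimal Python sketch of the per-step cost follows; the function names and the three-state example are hypothetical and only illustrate the cost structure:

    import numpy as np

    def kl_divergence(u, p):
        """Discrete KL divergence D(u || p) in nats.
        Assumes u[i] > 0 implies p[i] > 0 (absolute continuity)."""
        mask = u > 0
        return float(np.sum(u[mask] * np.log(u[mask] / p[mask])))

    def step_cost(f_x, u_row, p_row):
        """Per-step cost: state cost f_t(x) plus the KL control cost
        of steering the chain away from the passive dynamics p(.|x)."""
        return f_x + kl_divergence(u_row, p_row)

    # Hypothetical 3-state example with uniform passive dynamics.
    p_row = np.ones(3) / 3.0               # passive next-state distribution p(.|x)
    u_row = np.array([0.7, 0.2, 0.1])      # agent's chosen next-state distribution
    print(step_cost(1.5, u_row, p_row))    # state cost 1.5 plus D(u||p) ~ 0.297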

Comments: to appear in IEEE Transactions on Automatic Control
Categories: math.OC, cs.LG, cs.SY
Related articles:
arXiv:1308.1747 [math.OC] (Published 2013-08-08)
Sequence-based Anytime Control
arXiv:1605.04591 [math.OC] (Published 2016-05-15)
Ordinary Differential Equation Methods For Markov Decision Processes and Application to Kullback-Leibler Control Cost
arXiv:2405.16490 [math.OC] (Published 2024-05-26)
Formalising the intentional stance: attributing goals and beliefs to stochastic processes