arXiv:1901.07839 [math.OC]

Reinforcement Learning of Markov Decision Processes with Peak Constraints

Ather Gattami

Published 2019-01-23 (Version 1)

In this paper, we consider reinforcement learning of Markov Decision Processes (MDPs) with peak constraints, where an agent chooses a policy to optimize an objective while also satisfying additional constraints. The agent must take actions based on the observed states, reward outputs, and constraint outputs, without any knowledge of the dynamics, the reward functions, or the constraint functions. We introduce a game-theoretic approach to constructing reinforcement learning algorithms in which the agent maximizes an unconstrained objective that depends on the simulated action of a minimizing opponent acting on a finite action set and on the output data of the constraint functions (rewards). We show that the policies obtained from maximin Q-learning converge to the optimal policies. To the best of our knowledge, this is the first time learning algorithms are guaranteed to converge to optimal stationary policies for the MDP problem with peak constraints, for both discounted and expected average rewards.
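To make the game-theoretic idea concrete, the following is a minimal tabular sketch of a maximin Q-learning update, written under simplifying assumptions: one objective reward and one constraint output per step, a synthetic randomly generated MDP, and an opponent whose finite action set simply selects which of the two signals enters the update. All variable names and the environment setup are hypothetical illustrations, not the paper's exact algorithm or analysis.

```python
import numpy as np

# Sketch of tabular maximin Q-learning for a peak-constrained MDP.
# Assumed setup: signal 0 is the objective reward, signal 1 is the
# constraint output; the opponent's action b picks the signal, and the
# agent acts to maximize the worst case over the opponent's choices.

rng = np.random.default_rng(0)

n_states, n_actions, n_opponent = 5, 3, 2
gamma, alpha, eps = 0.95, 0.1, 0.1

# Synthetic MDP: random transition kernel and two reward signals.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] is a distribution over next states
R = rng.uniform(-1.0, 1.0, size=(2, n_states, n_actions))         # R[0]: objective, R[1]: constraint output

Q = np.zeros((n_states, n_actions, n_opponent))

def maximin_action(s):
    """Agent action maximizing the worst case over the opponent's finite action set."""
    return int(np.argmax(Q[s].min(axis=1)))

s = 0
for t in range(50_000):
    a = rng.integers(n_actions) if rng.random() < eps else maximin_action(s)
    s_next = rng.choice(n_states, p=P[s, a])
    for b in range(n_opponent):
        # Maximin target: max over the agent's next action of the min over the opponent's action.
        target = R[b, s, a] + gamma * Q[s_next].min(axis=1).max()
        Q[s, a, b] += alpha * (target - Q[s, a, b])
    s = s_next

policy = np.array([maximin_action(s) for s in range(n_states)])
print("greedy maximin policy:", policy)
```

The key design point illustrated here is that the constraint is never added to the objective with a fixed weight; instead, the inner minimization over the opponent's finite action set forces the greedy policy to do well against whichever signal (objective or constraint) is currently worst, which is the mechanism the abstract attributes to the maximin formulation.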
