arXiv Analytics


arXiv:1912.02714 [cs.LG]

Inferring the Optimal Policy using Markov Chain Monte Carlo

Brandon Trabucco, Albert Qu, Simon Li, Ganeshkumar Ashokavardhanan

Published 2019-11-16 (Version 1)

This paper investigates methods for estimating the optimal stochastic control policy for a Markov Decision Process with unknown transition dynamics and an unknown reward function. This form of model-free reinforcement learning encompasses many real-world systems, such as playing video games, simulated control tasks, and real robot locomotion. Existing methods for estimating the optimal stochastic control policy rely on high-variance estimates of the policy gradient. These methods are not guaranteed to find the optimal stochastic policy, and the high-variance gradient estimates make convergence unstable. To resolve these problems, we propose a technique that uses Markov Chain Monte Carlo to generate samples from the posterior distribution of the policy parameters conditioned on being optimal. Our method provably converges to the globally optimal stochastic policy and empirically exhibits variance similar to that of the policy gradient.
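The core idea of sampling policy parameters from a posterior conditioned on optimality can be sketched with a simple Metropolis-Hastings chain. Everything below is illustrative and not from the paper: the surrogate return `expected_return`, the exponentiated-return posterior `exp(J(theta)/temperature) * p(theta)` with a standard normal prior, and the random-walk proposal are all assumptions chosen to make the sketch self-contained.

```python
import numpy as np

def expected_return(theta):
    # Hypothetical stand-in for the expected return J(theta) of a policy
    # parameterized by theta; a real setting would estimate this by
    # rolling out the policy in the environment.
    return -np.sum((theta - 1.0) ** 2)

def log_posterior(theta, temperature=1.0):
    # Assumed form of the unnormalized log-posterior of the parameters
    # "conditioned on being optimal":
    #   p(theta | O = 1) ∝ exp(J(theta) / temperature) * p(theta),
    # with a standard normal prior p(theta).
    log_prior = -0.5 * np.sum(theta ** 2)
    return expected_return(theta) / temperature + log_prior

def metropolis_hastings(n_samples=5000, step=0.3, dim=2, seed=0):
    # Random-walk Metropolis-Hastings over the policy parameters.
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    logp = log_posterior(theta)
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(dim)
        logp_new = log_posterior(proposal)
        # Accept with probability min(1, p(proposal) / p(theta)).
        if np.log(rng.random()) < logp_new - logp:
            theta, logp = proposal, logp_new
        samples.append(theta.copy())
    return np.array(samples)

samples = metropolis_hastings()
# After burn-in, the chain concentrates where the posterior mass is:
# a compromise between the prior mode (0) and the return maximizer (1).
print(samples[2000:].mean(axis=0))
```

Unlike a policy-gradient step, which follows a noisy ascent direction, the chain draws whole parameter vectors whose density is tilted toward high return, so the output is a distribution over near-optimal policies rather than a single point estimate.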

Related articles:
arXiv:2412.17136 [cs.LG] (Published 2024-12-22)
Empirical evaluation of normalizing flows in Markov Chain Monte Carlo
arXiv:1303.4169 [cs.LG] (Published 2013-03-18)
Markov Chain Monte Carlo for Arrangement of Hyperplanes in Locality-Sensitive Hashing
arXiv:2205.08803 [cs.LG] (Published 2022-05-18)
Markov Chain Monte Carlo for Continuous-Time Switching Dynamical Systems