arXiv Analytics

arXiv:1204.1227 [math.OC]

An Approximate Newton Method for Markov Decision Processes

Thomas Furmston, David Barber

Published 2012-04-05, updated 2012-11-23 (version 3)

Gradient-based algorithms are among the methods of choice for the optimisation of Markov Decision Processes. In this article we present a novel approximate Newton algorithm for the optimisation of such models. The algorithm has several desirable properties over the naive application of Newton's method. Firstly, the approximate Hessian is guaranteed to be negative semidefinite over the entire parameter space whenever the controller is log-concave in the control parameters. Additionally, the inference required for our approximate Newton method is often the same as that required for first-order methods, such as steepest gradient ascent. The approximate Hessian also has sparsity properties that the exact Hessian lacks, and these make its inversion efficient in many situations of interest. We also provide an analysis that highlights a relationship between our approximate Newton method and both Expectation Maximisation and natural gradient ascent. Empirical results suggest that the algorithm has excellent convergence and robustness properties.
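To make the kind of update described above concrete, here is a minimal sketch on a toy two-state MDP with a tabular softmax controller, for which log pi is concave in the parameters, so (with nonnegative rewards) a Hessian approximation of the form E[grad^2 log pi(a|s) Q(s,a)] is negative semidefinite. This is one natural reading of the abstract, not the paper's exact algorithm; the toy MDP, all function names, and the small ridge term are illustrative assumptions.

```python
import numpy as np

# Toy 2-state, 2-action discounted MDP (illustrative assumption, not from the paper).
gamma = 0.9
nS, nA = 2, 2
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state distribution
R = rng.uniform(0.0, 1.0, size=(nS, nA))        # nonnegative rewards
mu0 = np.full(nS, 1.0 / nS)                      # initial state distribution

def policy(theta):
    """Tabular softmax controller pi[s, a]; log pi is concave in theta."""
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def q_values(pi):
    """Exact Q-values: solve (I - gamma P_pi) V = r_pi, then Q = R + gamma P V."""
    Ppi = np.einsum('sa,sat->st', pi, P)
    V = np.linalg.solve(np.eye(nS) - gamma * Ppi,
                        np.einsum('sa,sa->s', pi, R))
    return R + gamma * np.einsum('sat,t->sa', P, V)

def discounted_state_dist(pi):
    """d(s) = (1 - gamma) * sum_t gamma^t Pr(s_t = s)."""
    Ppi = np.einsum('sa,sat->st', pi, P)
    return (1 - gamma) * np.linalg.solve(np.eye(nS) - gamma * Ppi.T, mu0)

def gradient_and_approx_hessian(theta):
    """Policy gradient and the negative-semidefinite Hessian approximation.

    Both are expectations of derivatives of log pi weighted by Q, taken
    under the same discounted state-action distribution.
    """
    pi = policy(theta)
    Q = q_values(pi)
    d = discounted_state_dist(pi)
    n = nS * nA
    g = np.zeros(n)
    H = np.zeros((n, n))
    for s in range(nS):
        w = d[s] * pi[s]                          # weights d(s) * pi(a|s)
        for a in range(nA):
            # grad_theta log pi(a|s) = e_{s,a} - pi(.|s) on block s
            glog = np.zeros(n)
            glog[s * nA:(s + 1) * nA] = -pi[s]
            glog[s * nA + a] += 1.0
            g += w[a] * Q[s, a] * glog
            # Hessian of log softmax on block s: -(diag(pi) - pi pi^T),
            # negative semidefinite, so H inherits that sign with Q >= 0.
            Hlog = np.zeros((n, n))
            blk = np.diag(pi[s]) - np.outer(pi[s], pi[s])
            Hlog[s * nA:(s + 1) * nA, s * nA:(s + 1) * nA] = -blk
            H += w[a] * Q[s, a] * Hlog
    return g, H

theta = np.zeros((nS, nA))
for it in range(50):
    g, H = gradient_and_approx_hessian(theta)
    # Newton-style ascent: H is negative semidefinite, so -H^{-1} g is an
    # ascent direction; a small ridge keeps the linear solve well posed.
    step = np.linalg.solve(H - 1e-6 * np.eye(H.shape[0]), g)
    theta = theta - 0.5 * step.reshape(nS, nA)
    if it % 10 == 0:
        pi = policy(theta)
        J = mu0 @ np.linalg.solve(np.eye(nS) - gamma * np.einsum('sa,sat->st', pi, P),
                                  np.einsum('sa,sa->s', pi, R))
        print(f"iter {it:3d}  expected return J = {J:.4f}")
```

Note that the gradient and this Hessian approximation are built from the same quantities (the discounted state distribution, the policy, and the Q-values), which is consistent with the abstract's remark that the inference required often matches that of a first-order method.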

Related articles:
arXiv:1310.7906 [math.OC] (Published 2013-10-29, updated 2015-08-04)
Convergence Analysis of the Approximate Newton Method for Markov Decision Processes
arXiv:2412.12879 [math.OC] (Published 2024-12-17)
Robust Deterministic Policies for Markov Decision Processes under Budgeted Uncertainty
arXiv:2406.05086 [math.OC] (Published 2024-06-07)
Robust Reward Design for Markov Decision Processes