arXiv:1909.07373 [cs.LG]

Policy Prediction Network: Model-Free Behavior Policy with Model-Based Learning in Continuous Action Space

Zac Wellmer, James Kwok

Published 2019-09-15 (Version 1)

This paper proposes a novel deep reinforcement learning architecture inspired by earlier tree-structured architectures that were usable only in discrete action spaces. Policy Prediction Network improves sample complexity and performance on continuous control problems in exchange for extra computation at training time, with no additional computation at rollout time. Our approach integrates model-free and model-based reinforcement learning. Policy Prediction Network is the first to bring implicit model-based learning to policy gradient algorithms for continuous action spaces, made possible by an empirically justified clipping scheme. Our experiments focus on the MuJoCo environments so that results can be compared with similar work in this area.
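The abstract does not spell out the objective, so the sketch below is a rough illustration only, not the authors' method: it combines a PPO-style clipped surrogate (a common clipping scheme for continuous-action policy gradients) with a latent-transition prediction loss, which is one plausible way to mix a model-free behavior policy with implicit model-based learning. All module names, shapes, and loss coefficients (PPNSketch, transition, aux_coef, and so on) are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation): a clipped
# policy-gradient loss plus a latent-prediction auxiliary loss.
import torch
import torch.nn as nn


class PPNSketch(nn.Module):
    def __init__(self, obs_dim, act_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.Tanh())
        # Implicit transition model: predicts the next latent from latent + action.
        self.transition = nn.Sequential(
            nn.Linear(latent_dim + act_dim, latent_dim), nn.Tanh())
        self.policy_mean = nn.Linear(latent_dim, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))
        self.value = nn.Linear(latent_dim, 1)

    def dist(self, obs):
        z = self.encoder(obs)
        return torch.distributions.Normal(self.policy_mean(z), self.log_std.exp()), z


def ppn_loss(net, obs, act, next_obs, old_logp, adv, returns,
             clip_eps=0.2, aux_coef=1.0, value_coef=0.5):
    dist, z = net.dist(obs)
    logp = dist.log_prob(act).sum(-1)

    # Model-free part: clipped surrogate keeping the new policy close to the
    # behavior policy that generated the rollout.
    ratio = torch.exp(logp - old_logp)
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    policy_loss = -torch.min(ratio * adv, clipped * adv).mean()

    # Model-based part: regress the predicted next latent onto the encoding of
    # the observed next state (implicit model learning, no reconstruction).
    z_pred = net.transition(torch.cat([z, act], dim=-1))
    with torch.no_grad():
        z_target = net.encoder(next_obs)
    aux_loss = (z_pred - z_target).pow(2).mean()

    value_loss = (net.value(z).squeeze(-1) - returns).pow(2).mean()
    return policy_loss + value_coef * value_loss + aux_coef * aux_loss
```

Under this reading, a tree-structured or multi-step variant would simply roll the transition module forward for several steps and sum the prediction losses; only the encoder and policy are needed at rollout time, which is consistent with the claim of no extra rollout cost.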

Related articles:
arXiv:1504.01840 [cs.LG] (Published 2015-04-08)
Autonomous CRM Control via CLV Approximation with Deep Reinforcement Learning in Discrete and Continuous Action Space
arXiv:2309.04459 [cs.LG] (Published 2023-09-08)
Subwords as Skills: Tokenization for Sparse-Reward Reinforcement Learning
arXiv:2302.04009 [cs.LG] (Published 2023-02-08)
Investigating the role of model-based learning in exploration and transfer