arXiv:2002.10621 [cs.LG]

Model-Based Reinforcement Learning for Physical Systems Without Velocity and Acceleration Measurements

Alberto Dalla Libera, Diego Romeres, Devesh K. Jha, Bill Yerazunis, Daniel Nikovski

Published 2020-02-25, Version 1

In this paper, we propose a derivative-free model learning framework for Reinforcement Learning (RL) algorithms based on Gaussian Process Regression (GPR). In many mechanical systems, only positions can be measured by the sensing instruments. Instead of representing the system state, as suggested by the physics, as a collection of positions, velocities, and accelerations, we therefore define the state as a set of past position measurements. However, the equations of motion derived from physical first principles cannot be applied directly in this framework, since they are functions of velocities and accelerations. For this reason, we introduce a novel derivative-free, physically inspired kernel, which can be easily combined with nonparametric derivative-free Gaussian Process models. Tests performed on two real platforms show that the proposed state definition, combined with the proposed model, improves estimation performance and data efficiency with respect to traditional GPR-based models. Finally, we validate the framework by solving two RL control problems on two real robotic systems.
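To make the state definition concrete, here is a minimal sketch (not the authors' code) of a derivative-free GP model in Python with scikit-learn. The damped-oscillator data, the history length HISTORY, and the helper make_dataset are invented for illustration, and a plain RBF kernel stands in for the paper's physically inspired derivative-free kernel: the point is only that the regressor's input is a window of past positions plus the control input, with no velocities or accelerations estimated anywhere.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

HISTORY = 3  # hypothetical number of past position measurements in the state

def make_dataset(q, u, history=HISTORY):
    """Build (state, target) pairs: state = [q_t, ..., q_{t-history+1}, u_t],
    target = q_{t+1}. Derivative-free: no numerical differentiation of q."""
    X, y = [], []
    for t in range(history - 1, len(q) - 1):
        window = q[t - history + 1 : t + 1][::-1]  # newest position first
        X.append(np.concatenate([window, [u[t]]]))
        y.append(q[t + 1])
    return np.array(X), np.array(y)

# Toy data (assumed, not from the paper): noisy positions of a driven,
# damped 1-DoF oscillator.
rng = np.random.default_rng(0)
t = np.arange(400) * 0.01
u = np.sin(2.0 * t)
q = np.exp(-0.1 * t) * np.cos(5.0 * t) + 0.05 * u \
    + 0.001 * rng.standard_normal(t.size)

X, y = make_dataset(q, u)
# Stand-in kernel over the position history; the paper's derivative-free
# physically inspired kernel would replace this RBF.
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(1e-4),
    normalize_y=True,
).fit(X[:300], y[:300])

pred, std = gp.predict(X[300:], return_std=True)
print("one-step RMSE:", np.sqrt(np.mean((pred - y[300:]) ** 2)))
print("mean predictive std:", std.mean())
```

Because the state is just a fixed-length window of raw measurements, the same construction applies directly when only encoder positions are available, which is the setting the abstract targets.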
