arXiv:2403.19024 [cs.LG]

Exploiting Symmetry in Dynamics for Model-Based Reinforcement Learning with Asymmetric Rewards

Yasin Sonmez, Neelay Junnarkar, Murat Arcak

Published 2024-03-27 (Version 1)

Recent work in reinforcement learning has leveraged symmetries in the model to improve sample efficiency when training a policy. A commonly used simplifying assumption is that the dynamics and the reward exhibit the same symmetry. However, in many real-world environments the dynamics are often symmetric independently of the reward: the reward need not satisfy the same symmetries as the dynamics. In this paper, we investigate scenarios where only the dynamics are assumed to exhibit symmetry, extending the scope of problems in reinforcement learning and control to which symmetry techniques can be applied. We use Cartan's moving frame method to introduce a technique for learning dynamics that, by construction, exhibit specified symmetries. We demonstrate through numerical experiments that the proposed method learns a more accurate dynamical model.
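
To make the moving-frame idea concrete, below is a minimal sketch, not the authors' implementation: a group element computed from the state maps the state to a canonical frame, an arbitrary base regressor predicts the next state there, and the prediction is mapped back with the inverse element. The names (base_model, equivariant_dynamics) and the 2-D double-integrator state layout [x, y, vx, vy] with translation symmetry in position are assumptions chosen for illustration.

    import numpy as np

    def moving_frame(state):
        # Group element g(state) that translates the position part to the
        # origin, giving a canonical representative of the state's orbit.
        return -state[:2]

    def act(g, state):
        # Apply a translation g to the position coordinates of the state.
        out = np.array(state, dtype=float)
        out[:2] += g
        return out

    def equivariant_dynamics(base_model, state, action):
        # Predict in the canonical frame, then map back. The composite
        # model satisfies f(g . s, a) = g . f(s, a) for every translation g,
        # no matter what base_model computes.
        g = moving_frame(state)
        next_canonical = base_model(act(g, state), action)
        return act(-g, next_canonical)

    def base_model(state, action):
        # Stand-in for a learned regressor; deliberately NOT translation-
        # equivariant on its own (the sin term depends on absolute position).
        dt = 0.1
        p, v = state[:2], state[2:]
        return np.concatenate([p + dt * v + 0.01 * np.sin(p),
                               v + dt * np.asarray(action)])

    # Check equivariance: shifting the state before prediction equals
    # shifting the prediction afterward.
    s = np.array([3.0, -1.0, 0.5, 0.2])
    a = np.array([0.0, 1.0])
    c = np.array([10.0, -4.0])
    lhs = equivariant_dynamics(base_model, act(c, s), a)
    rhs = act(c, equivariant_dynamics(base_model, s, a))
    assert np.allclose(lhs, rhs)

Because every state is canonicalized before prediction, the wrapped model exhibits the specified symmetry by construction, which is the sense of that phrase in the abstract; this sketch only illustrates the canonicalization pattern, not the paper's specific models or experiments.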

Related articles:
arXiv:2208.14501 [cs.LG] (Published 2022-08-30)
Model-Based Reinforcement Learning with SINDy
arXiv:2007.14535 [cs.LG] (Published 2020-07-29)
Dreaming: Model-based Reinforcement Learning by Latent Imagination without Reconstruction
arXiv:2002.10621 [cs.LG] (Published 2020-02-25)
Model-Based Reinforcement Learning for Physical Systems Without Velocity and Acceleration Measurements