arXiv Analytics

arXiv:2006.13408 [cs.LG]

Control-Aware Representations for Model-based Reinforcement Learning

Brandon Cui, Yinlam Chow, Mohammad Ghavamzadeh

Published 2020-06-24 (Version 1)

A major challenge in modern reinforcement learning (RL) is efficient control of dynamical systems from high-dimensional sensory observations. Learning controllable embedding (LCE) is a promising approach that addresses this challenge by embedding the observations into a lower-dimensional latent space, estimating the latent dynamics, and utilizing them to perform control in the latent space. Two important questions in this area are how to learn a representation that is amenable to the control problem at hand, and how to achieve an end-to-end framework for representation learning and control. In this paper, we take a few steps towards addressing these questions. We first formulate an LCE model to learn representations that are suitable to be used by a policy iteration style algorithm in the latent space. We call this model control-aware representation learning (CARL). We derive a loss function for CARL that has a close connection to the prediction, consistency, and curvature (PCC) principle for representation learning. We derive three implementations of CARL. In the offline implementation, we replace the locally-linear control algorithm (e.g., iLQR) used by the existing LCE methods with an RL algorithm, namely model-based soft actor-critic, and show that it results in significant improvement. In online CARL, we interleave representation learning and control, and demonstrate a further gain in performance. Finally, we propose value-guided CARL, a variation in which we optimize a weighted version of the CARL loss function, where the weights depend on the TD-error of the current policy. We evaluate the proposed algorithms with extensive experiments on benchmark tasks and compare them with several LCE baselines.
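The value-guided variant above weights the representation-learning loss by the TD-error of the current policy. A minimal sketch of that weighting idea, assuming a simple additive TD-error and a toy batch of per-sample losses (all names, shapes, and the normalization scheme are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

# Illustrative sketch of TD-error-weighted loss aggregation, as in
# value-guided CARL. The linear normalization and all variable names
# are assumptions for illustration only.

rng = np.random.default_rng(0)

def td_errors(rewards, values, next_values, gamma=0.99):
    """TD-error of the current value estimates: r + gamma * V(s') - V(s)."""
    return rewards + gamma * next_values - values

def value_guided_loss(per_sample_losses, rewards, values, next_values):
    """Weight per-sample representation losses by normalized |TD-error|."""
    delta = np.abs(td_errors(rewards, values, next_values))
    weights = delta / (delta.sum() + 1e-8)   # weights sum to ~1
    return float(np.sum(weights * per_sample_losses))

# Toy batch: placeholder per-sample losses and value estimates.
losses = rng.random(8)
rewards = rng.random(8)
values = rng.random(8)
next_values = rng.random(8)

w_loss = value_guided_loss(losses, rewards, values, next_values)
print(w_loss)
```

Because the weights form a convex combination, the weighted loss stays within the range of the per-sample losses while emphasizing transitions where the current policy's value estimates are most wrong.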

Related articles:
arXiv:2111.08550 [cs.LG] (Published 2021-11-16, updated 2022-01-23)
On Effective Scheduling of Model-based Reinforcement Learning
Hang Lai et al.
arXiv:2411.11511 [cs.LG] (Published 2024-11-18)
Structure learning with Temporal Gaussian Mixture for model-based Reinforcement Learning
arXiv:1807.03858 [cs.LG] (Published 2018-07-10)
Algorithmic Framework for Model-based Reinforcement Learning with Theoretical Guarantees