arXiv Analytics

arXiv:2007.14535 [cs.LG]

Dreaming: Model-based Reinforcement Learning by Latent Imagination without Reconstruction

Masashi Okada, Tadahiro Taniguchi

Published 2020-07-29 (Version 1)

In the present paper, we propose a decoder-free extension of Dreamer, a leading model-based reinforcement learning (MBRL) method that learns from pixels. Dreamer is a sample- and cost-efficient solution to robot learning: it trains a latent state-space model based on a variational autoencoder and performs policy optimization by imagining latent trajectories. However, this autoencoding-based approach often causes object vanishing, in which the autoencoder fails to perceive key objects for solving control tasks, significantly limiting Dreamer's potential. This work aims to relieve this bottleneck and enhance Dreamer's performance by removing the decoder. To this end, we first derive a likelihood-free, InfoMax objective for contrastive learning from Dreamer's evidence lower bound. Second, we incorporate two components into the learning scheme, (i) independent linear dynamics and (ii) random-crop data augmentation, to improve training performance. Compared with Dreamer and other recent model-free reinforcement learning methods, our newly devised Dreamer with InfoMax and without a generative decoder (Dreaming) achieves the best scores on five difficult simulated robotics tasks on which Dreamer suffers from object vanishing.
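The two ingredients named in the abstract, a contrastive InfoMax objective in place of reconstruction and random-crop data augmentation, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the function names, the plain InfoNCE-style loss, and the per-image crop scheme are illustrative assumptions standing in for the derived objective.

```python
import numpy as np

def random_crop(images, crop_size):
    """Randomly crop each image in a batch (illustrative augmentation sketch).

    images: (B, H, W, C) array; crop_size: side length of the square crop.
    """
    b, h, w, _ = images.shape
    ys = np.random.randint(0, h - crop_size + 1, size=b)
    xs = np.random.randint(0, w - crop_size + 1, size=b)
    return np.stack([img[y:y + crop_size, x:x + crop_size]
                     for img, y, x in zip(images, ys, xs)])

def info_nce_loss(latents, embeddings):
    """InfoNCE-style contrastive loss, a stand-in for the likelihood-free
    InfoMax objective: the positive pair for each latent is the observation
    embedding at the same batch index; other indices serve as negatives.

    latents, embeddings: (B, D) arrays of model states and image features.
    """
    logits = latents @ embeddings.T              # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # cross-entropy on diagonal
```

Replacing the decoder's reconstruction term with a loss of this form means no pixel-level generation is required, which is what lets the model stop "rendering" and hence stop losing small task-relevant objects.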

Related articles: Most relevant | Search more
arXiv:2102.13651 [cs.LG] (Published 2021-02-26)
On the Importance of Hyperparameter Optimization for Model-based Reinforcement Learning
Baohe Zhang et al.
arXiv:2208.14501 [cs.LG] (Published 2022-08-30)
Model-Based Reinforcement Learning with SINDy
arXiv:2403.19024 [cs.LG] (Published 2024-03-27)
Exploiting Symmetry in Dynamics for Model-Based Reinforcement Learning with Asymmetric Rewards