arXiv:2006.07808 [cs.LG]

Reinforcement Learning with Supervision from Noisy Demonstrations

Kun-Peng Ning, Sheng-Jun Huang

Published 2020-06-14 (Version 1)

Reinforcement learning has achieved great success in various applications. However, learning an effective policy usually requires a huge amount of interaction data from the environment, which can be computationally costly and time-consuming. To overcome this challenge, the framework of Reinforcement Learning with Expert Demonstrations (RLED) was proposed to exploit supervision from expert demonstrations. Although RLED methods can reduce the number of learning iterations, they usually assume the demonstrations are perfect, and thus may be seriously misled by noisy demonstrations in real applications. In this paper, we propose a novel framework that adaptively learns the policy by jointly interacting with the environment and exploiting the expert demonstrations. Specifically, we form an instance from each step of the demonstration trajectory, and define a joint loss function that simultaneously maximizes the expected reward and minimizes the difference between agent behaviors and demonstrations. Most importantly, by calculating the expected gain of the value function, we assign each instance a weight that estimates its potential utility, which emphasizes the more helpful demonstrations while filtering out noisy ones. Experimental results in various environments with multiple popular reinforcement learning algorithms show that the proposed approach learns robustly from noisy demonstrations and achieves higher performance in fewer iterations.
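To make the idea of a jointly weighted objective concrete, the sketch below combines a standard policy-gradient term on the agent's own rollouts with a per-instance weighted behavior-cloning term on demonstration steps. It is not the authors' exact algorithm: the clipped value-gain weighting rule, the network sizes, and all hyperparameters are illustrative assumptions, and the networks here are placeholders rather than the paper's architecture.

```python
# Hedged sketch of a joint RL + weighted-imitation loss (assumptions noted inline).
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, n_actions = 4, 2  # toy dimensions for illustration only

policy_net = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, n_actions))
value_net = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, 1))

def joint_loss(obs, actions, returns,
               demo_obs, demo_actions, demo_returns,
               imitation_coef=0.5):
    """Policy-gradient loss on agent data plus a per-instance weighted
    imitation loss on (possibly noisy) demonstration steps."""
    # Standard policy-gradient term on the agent's own rollout.
    log_probs = F.log_softmax(policy_net(obs), dim=-1)
    advantages = returns - value_net(obs).squeeze(-1).detach()
    pg_loss = -(log_probs.gather(1, actions.unsqueeze(1)).squeeze(1) * advantages).mean()

    # Per-instance weights: estimated gain of the demonstrated step over the
    # current value estimate; unhelpful/noisy steps get weight ~0 (assumption).
    with torch.no_grad():
        demo_values = value_net(demo_obs).squeeze(-1)
        weights = torch.clamp(demo_returns - demo_values, min=0.0)

    # Weighted behavior cloning: pull the policy toward useful demo actions only.
    demo_log_probs = F.log_softmax(policy_net(demo_obs), dim=-1)
    demo_action_lp = demo_log_probs.gather(1, demo_actions.unsqueeze(1)).squeeze(1)
    bc_loss = -(weights * demo_action_lp).mean()

    return pg_loss + imitation_coef * bc_loss

# Dummy usage with random data, just to show the expected tensor shapes.
obs = torch.randn(8, obs_dim)
actions = torch.randint(0, n_actions, (8,))
returns = torch.randn(8)
demo_obs = torch.randn(5, obs_dim)
demo_actions = torch.randint(0, n_actions, (5,))
demo_returns = torch.randn(5)
loss = joint_loss(obs, actions, returns, demo_obs, demo_actions, demo_returns)
loss.backward()
```

The key design choice illustrated here is that the imitation term is weighted per demonstration step rather than applied uniformly, so steps whose estimated gain is non-positive contribute nothing to the gradient.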

Related articles:
arXiv:1907.01285 [cs.LG] (Published 2019-07-02)
Learning the Arrow of Time
arXiv:1904.00243 [cs.LG] (Published 2019-03-30)
Symmetry-Based Disentangled Representation Learning requires Interaction with Environments
arXiv:1908.10479 [cs.LG] (Published 2019-08-27)
Exploration-Enhanced POLITEX