arXiv:1904.03276 [cs.LG]

Synthesized Policies for Transfer and Adaptation across Tasks and Environments

Hexiang Hu, Liyu Chen, Boqing Gong, Fei Sha

Published 2019-04-05 (Version 1)

The ability to transfer in reinforcement learning is key to building agents with general artificial intelligence. In this paper, we consider the problem of learning to transfer simultaneously across both environments (ENV) and tasks (TASK) and, perhaps more importantly, of doing so by learning from only sparse (ENV, TASK) pairs out of all possible combinations. We propose a novel compositional neural network architecture that expresses a meta-rule for composing policies from environment and task embeddings. Notably, one of the main challenges is to learn the embeddings jointly with the meta-rule. We further propose new training methods to disentangle the embeddings, making them both distinctive signatures of the environments and tasks and effective building blocks for composing the policies. Experiments on GridWorld and Thor, in which the agent takes an egocentric view as input, show that our approach attains high success rates on all (ENV, TASK) pairs after learning from only 40% of them.
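
To make the composition concrete, the following is a minimal PyTorch sketch of one plausible reading of such a meta-rule: learned environment and task embeddings produce mixing coefficients over a shared basis of linear policy heads, so each (ENV, TASK) pair receives its own synthesized policy. All names, shapes, and the specific composition rule here are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class SynthesizedPolicy(nn.Module):
    """Sketch: synthesize a policy head from (env, task) embeddings.

    Hypothetical composition rule: the (env, task) pair yields mixing
    coefficients over a shared basis of linear policy heads. This is an
    illustrative assumption, not the paper's exact architecture.
    """

    def __init__(self, n_envs, n_tasks, emb_dim=32, state_dim=128,
                 n_actions=4, n_basis=8):
        super().__init__()
        self.env_emb = nn.Embedding(n_envs, emb_dim)    # learned jointly ...
        self.task_emb = nn.Embedding(n_tasks, emb_dim)  # ... with the meta-rule
        # Meta-rule: map the concatenated pair to basis mixing coefficients.
        self.to_coef = nn.Sequential(
            nn.Linear(2 * emb_dim, n_basis), nn.Softmax(dim=-1))
        # Shared basis of linear policy heads, mixed per (ENV, TASK) pair.
        self.basis = nn.Parameter(
            torch.randn(n_basis, state_dim, n_actions) * 0.01)

    def forward(self, state_feat, env_id, task_id):
        z = torch.cat([self.env_emb(env_id), self.task_emb(task_id)], dim=-1)
        coef = self.to_coef(z)                             # (B, n_basis)
        w = torch.einsum('bk,ksa->bsa', coef, self.basis)  # per-pair head
        logits = torch.einsum('bs,bsa->ba', state_feat, w)
        return logits  # action logits of the synthesized policy

if __name__ == "__main__":
    policy = SynthesizedPolicy(n_envs=20, n_tasks=20)
    s = torch.randn(5, 128)                # batch of state features
    env = torch.randint(0, 20, (5,))
    task = torch.randint(0, 20, (5,))
    print(policy(s, env, task).shape)      # torch.Size([5, 4])

In this reading, the disentanglement objectives mentioned above would enter as extra loss terms (for instance, auxiliary losses encouraging env_emb and task_emb to identify their environment and task, respectively) alongside the usual policy-learning loss; the sketch shows only the compositional forward pass.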

Comments: presented at NeurIPS 2018 as a Spotlight
Categories: cs.LG, stat.ML
Related articles:
arXiv:1907.01285 [cs.LG] (Published 2019-07-02)
Learning the Arrow of Time
arXiv:1809.01921 [cs.LG] (Published 2018-09-06)
RDPD: Rich Data Helps Poor Data via Imitation
arXiv:1904.00243 [cs.LG] (Published 2019-03-30)
Symmetry-Based Disentangled Representation Learning requires Interaction with Environments