arXiv Analytics

arXiv:2006.03662 [cs.LG]

Rapid Task-Solving in Novel Environments

Sam Ritter, Ryan Faulkner, Laurent Sartran, Adam Santoro, Matt Botvinick, David Raposo

Published 2020-06-05 (Version 1)

When thrust into an unfamiliar environment and charged with solving a series of tasks, an effective agent should (1) leverage prior knowledge to solve its current task while (2) efficiently exploring to gather knowledge for use in future tasks, and then (3) plan using that knowledge when faced with new tasks in that same environment. We introduce two domains for conducting research on this challenge, and find that state-of-the-art deep reinforcement learning (RL) agents fail to plan in novel environments. We develop a recursive implicit planning module that operates over episodic memories, and show that the resulting deep-RL agent is able to explore and plan in novel environments, outperforming the nearest baseline by factors of 2-3 across the two domains. We find evidence that our module (1) learned to execute a sensible information-propagating algorithm and (2) generalizes to situations beyond its training experience.
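The abstract describes the planning module only at a high level: a shared computation applied recursively over episodic memories, so that information can propagate between memories before the agent acts. As a minimal sketch of that idea, assuming multi-head self-attention as the shared step and with illustrative names and dimensions throughout (the authors' actual architecture may differ), such a module could look like:

    import torch
    import torch.nn as nn

    class RecursiveImplicitPlanner(nn.Module):
        # Sketch only: one weight-tied attention block applied for
        # n_steps iterations over a set of episodic memories, then a
        # query-based readout. Names and sizes are assumptions, not
        # taken from the paper.
        def __init__(self, dim=64, n_heads=4, n_steps=3):
            super().__init__()
            self.n_steps = n_steps
            self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
            self.mlp = nn.Sequential(
                nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            self.norm1 = nn.LayerNorm(dim)
            self.norm2 = nn.LayerNorm(dim)
            self.readout = nn.MultiheadAttention(dim, n_heads, batch_first=True)

        def forward(self, memories, query):
            # memories: (batch, n_memories, dim); query: (batch, 1, dim)
            h = memories
            for _ in range(self.n_steps):      # recursion: same weights every step
                a, _ = self.attn(h, h, h)      # memories attend to each other
                h = self.norm1(h + a)          # residual + layer norm
                h = self.norm2(h + self.mlp(h))
            plan, _ = self.readout(query, h, h)  # current state reads out a plan
            return plan.squeeze(1)               # (batch, dim)

    planner = RecursiveImplicitPlanner()
    plan = planner(torch.randn(8, 32, 64), torch.randn(8, 1, 64))  # -> (8, 64)

Iterating a single weight-tied block is what lets information hop between memories: after k steps, one memory can influence another up to k attention hops away, which is consistent with the abstract's claim that the module learned an information-propagating algorithm.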
