arXiv Analytics

arXiv:1910.01751 [cs.LG]

Causal Induction from Visual Observations for Goal Directed Tasks

Suraj Nair, Yuke Zhu, Silvio Savarese, Li Fei-Fei

Published 2019-10-03, Version 1

Causal reasoning has been an indispensable capability for humans and other intelligent animals to interact with the physical world. In this work, we propose to endow an artificial agent with the capability of causal reasoning for completing goal-directed tasks. We develop learning-based approaches to inducing causal knowledge in the form of directed acyclic graphs, which can be used to contextualize a learned goal-conditioned policy to perform tasks in novel environments with latent causal structures. We leverage attention mechanisms in our causal induction model and goal-conditioned policy, enabling us to incrementally generate the causal graph from the agent's visual observations and to selectively use the induced graph for determining actions. Our experiments show that our method effectively generalizes to completing new tasks in novel environments with previously unseen causal structures.
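The sketch below is only a toy illustration of the idea described in the abstract: an attention-style module that incrementally accumulates a soft adjacency matrix (a relaxed DAG) over scene entities from successive visual-observation embeddings, and a goal-conditioned policy that attends over the induced edges to pick an action. It is not the paper's actual architecture; the entity decomposition, the outer-product edge update, the projection matrices `W_src`/`W_dst`, and the one-action-per-entity policy are all illustrative assumptions.

```python
import numpy as np


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


class CausalInductionSketch:
    """Hypothetical attention-style inducer: accumulates a soft adjacency
    matrix over N entities from consecutive observation embeddings."""

    def __init__(self, num_entities, obs_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Assumed learned projections mapping an observation change to
        # per-entity "cause" and "effect" attention scores.
        self.W_src = rng.normal(scale=0.1, size=(obs_dim, num_entities))
        self.W_dst = rng.normal(scale=0.1, size=(obs_dim, num_entities))
        self.N = num_entities

    def induce(self, obs_embeddings):
        # obs_embeddings: (T, obs_dim); each row encodes one visual frame.
        A = np.zeros((self.N, self.N))
        for t in range(1, len(obs_embeddings)):
            delta = obs_embeddings[t] - obs_embeddings[t - 1]  # change between frames
            src = softmax(delta @ self.W_src)  # which entity was likely acted on
            dst = softmax(delta @ self.W_dst)  # which entity changed as an effect
            A += np.outer(src, dst)            # incrementally accumulate edge evidence
        np.fill_diagonal(A, 0.0)               # disallow self-loops
        return softmax(A, axis=1)              # row-normalised soft adjacency


def goal_conditioned_policy(A, goal_entity):
    """Toy policy: attend to the induced parents of the goal entity and
    act on the most likely cause (one action per entity, an assumption)."""
    parent_scores = A[:, goal_entity]          # incoming edges into the goal entity
    attn = softmax(parent_scores)
    return int(np.argmax(attn))                # index of the entity to act on


# Usage with random stand-in embeddings (no real visual encoder here).
rng = np.random.default_rng(0)
T, obs_dim, N = 8, 16, 5
obs = rng.normal(size=(T, obs_dim))
inducer = CausalInductionSketch(num_entities=N, obs_dim=obs_dim)
A = inducer.induce(obs)
print("soft adjacency:\n", np.round(A, 2))
print("act on entity:", goal_conditioned_policy(A, goal_entity=3))
```

In the paper the observation embeddings would come from a learned visual encoder and the graph would feed a full goal-conditioned action policy; here both are replaced by random stand-ins so the sketch stays self-contained and runnable.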

Related articles:
arXiv:2006.03662 [cs.LG] (Published 2020-06-05)
Rapid Task-Solving in Novel Environments
arXiv:2106.04546 [cs.LG] (Published 2021-06-08)
LEADS: Learning Dynamical Systems that Generalize Across Environments
arXiv:2110.12301 [cs.LG] (Published 2021-10-23, updated 2022-03-17)
Map Induction: Compositional spatial submap learning for efficient exploration in novel environments