arXiv:2211.04878 [cs.LG]

Foundation Models for Semantic Novelty in Reinforcement Learning

Tarun Gupta, Peter Karkus, Tong Che, Danfei Xu, Marco Pavone

Published: 2022-11-09 (Version 1)

Effectively exploring the environment is a key challenge in reinforcement learning (RL). We address this challenge by defining a novel intrinsic reward based on a foundation model, such as Contrastive Language-Image Pretraining (CLIP), which encodes a wealth of domain-independent semantic visual-language knowledge about the world. Specifically, our intrinsic reward is defined from pre-trained CLIP embeddings without any fine-tuning or learning on the target RL task. We demonstrate that CLIP-based intrinsic rewards can drive exploration towards semantically meaningful states and outperform state-of-the-art methods in challenging sparse-reward, procedurally-generated environments.
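The abstract does not spell out the exact reward formula, so the following is only a minimal sketch of how a CLIP-based intrinsic reward could look: frames are embedded with a frozen CLIP image encoder (no fine-tuning), and novelty is scored as distance in embedding space to states already visited in the episode. The encoder choice ("ViT-B/32"), the episodic memory, and the min-cosine-similarity rule are illustrative assumptions, not the paper's method.

# Hedged sketch, not the paper's exact formulation: intrinsic reward from
# frozen CLIP embeddings, measured against an episodic memory of past states.
import torch
import clip  # OpenAI CLIP package

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # frozen, no fine-tuning

class CLIPNoveltyReward:
    def __init__(self):
        self.memory = []  # per-episode store of normalized CLIP embeddings

    @torch.no_grad()
    def __call__(self, frame_pil):
        # Encode the current observation with the frozen CLIP image encoder.
        image = preprocess(frame_pil).unsqueeze(0).to(device)
        emb = model.encode_image(image)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        if not self.memory:
            self.memory.append(emb)
            return 1.0  # first state of the episode treated as maximally novel
        # Novelty = 1 - max cosine similarity to previously visited states.
        sims = torch.cat(self.memory) @ emb.T
        reward = float(1.0 - sims.max())
        self.memory.append(emb)
        return reward

    def reset(self):
        self.memory.clear()  # clear episodic memory at episode boundaries

In use, this reward would be added to the environment's sparse extrinsic reward at each step and the memory reset at episode boundaries; the key property matching the abstract is that the CLIP encoder stays fixed throughout training.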

Comments: Foundation Models for Decision Making Workshop at Neural Information Processing Systems, 2022
Categories: cs.LG, cs.AI
Related articles:
arXiv:1906.07865 [cs.LG] (Published 2019-06-19)
Adapting Behaviour via Intrinsic Reward: A Survey and Empirical Study
arXiv:1402.0560 [cs.LG] (Published 2014-02-04)
Safe Exploration of State and Action Spaces in Reinforcement Learning
arXiv:1005.0125 [cs.LG] (Published 2010-05-02)
Adaptive Bases for Reinforcement Learning