arXiv Analytics

arXiv:2502.07279 [cs.LG]

Exploratory Diffusion Model for Unsupervised Reinforcement Learning

Chengyang Ying, Huayu Chen, Xinning Zhou, Zhongkai Hao, Hang Su, Jun Zhu

Published 2025-02-11; updated 2025-05-16 (version 2)

Unsupervised reinforcement learning (URL) aims to pre-train agents by exploring diverse states or skills in reward-free environments, facilitating efficient adaptation to downstream tasks. Since the agent cannot access extrinsic rewards during unsupervised exploration, existing methods design intrinsic rewards that model the explored data and encourage further exploration. However, the explored data are always heterogeneous, which demands powerful representation abilities from both the intrinsic reward model and the pre-trained policy. In this work, we propose the Exploratory Diffusion Model (ExDM), which leverages the strong expressive ability of diffusion models to fit the explored data, simultaneously boosting exploration and providing an efficient initialization for downstream tasks. Specifically, ExDM accurately estimates the distribution of the data collected in the replay buffer with a diffusion model and introduces a score-based intrinsic reward that encourages the agent to explore less-visited states. After pre-training, ExDM enables rapid adaptation to downstream tasks: we provide theoretical analyses and practical algorithms for fine-tuning diffusion policies, addressing key challenges such as the training instability and computational cost caused by multi-step sampling. Extensive experiments demonstrate that ExDM outperforms existing state-of-the-art baselines in both efficient unsupervised exploration and fast fine-tuning on downstream tasks, especially in structurally complicated environments.
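To make the score-based intrinsic reward concrete, the following is a minimal, hypothetical PyTorch sketch, not the paper's implementation: a small diffusion model is fit to replay-buffer states with a standard denoising loss, and the per-state denoising error serves as the intrinsic reward, so states the model reconstructs poorly (i.e., less-visited states) are rewarded. All names (ScoreNet, diffusion_loss, intrinsic_reward) and the cosine noise schedule are assumptions for illustration.

```python
# Hypothetical sketch of a score-based intrinsic reward from a diffusion
# model fit to replay-buffer states. Names and schedule are illustrative
# assumptions, not the ExDM paper's actual implementation.
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Small MLP predicting the noise added to a state at diffusion time t."""
    def __init__(self, state_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, t], dim=-1))

def diffusion_loss(model: ScoreNet, states: torch.Tensor, T: int = 100):
    """Standard DDPM-style denoising loss on replay-buffer states."""
    t = torch.randint(1, T + 1, (states.size(0), 1)).float() / T
    alpha_bar = torch.cos(t * torch.pi / 2) ** 2   # assumed cosine schedule
    eps = torch.randn_like(states)
    x_t = alpha_bar.sqrt() * states + (1 - alpha_bar).sqrt() * eps
    return ((model(x_t, t) - eps) ** 2).mean()

@torch.no_grad()
def intrinsic_reward(model: ScoreNet, states: torch.Tensor,
                     T: int = 100, n_samples: int = 4) -> torch.Tensor:
    """Per-state denoising error: high for poorly modeled (rarely
    visited) states, low for well-covered ones."""
    errs = []
    for _ in range(n_samples):
        t = torch.rand(states.size(0), 1).clamp(min=1.0 / T)
        alpha_bar = torch.cos(t * torch.pi / 2) ** 2
        eps = torch.randn_like(states)
        x_t = alpha_bar.sqrt() * states + (1 - alpha_bar).sqrt() * eps
        errs.append(((model(x_t, t) - eps) ** 2).mean(dim=-1))
    return torch.stack(errs).mean(dim=0)   # shape: (batch,)
```

Averaging the denoising error over several noise draws reduces the variance of the reward estimate; in an exploration loop this reward would replace the absent extrinsic reward when updating the agent.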

Related articles:
arXiv:2211.03782 [cs.LG] (Published 2022-11-07)
On minimal variations for unsupervised representation learning
arXiv:2309.17002 [cs.LG] (Published 2023-09-29)
Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks
Hao Chen et al.
arXiv:2310.15318 [cs.LG] (Published 2023-10-23)
HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous Graph Neural Networks