arXiv Analytics

arXiv:2310.08669 [cs.CV]

Multimodal Large Language Model for Visual Navigation

Yao-Hung Hubert Tsai, Vansh Dhar, Jialu Li, Bowen Zhang, Jian Zhang

Published 2023-10-12, Version 1

Recent efforts to enable visual navigation using large language models have mainly focused on developing complex prompt systems. These systems incorporate instructions, observations, and history into massive text prompts, which are then combined with pre-trained large language models to facilitate visual navigation. In contrast, our approach aims to fine-tune large language models for visual navigation without extensive prompt engineering. Our design takes as input a simple text prompt, the current observations, and a history collector model that gathers information from previous observations. As output, it provides a probability distribution over the possible actions the agent can take during navigation. We train our model using human demonstrations and collision signals from the Habitat-Matterport 3D Dataset (HM3D). Experimental results demonstrate that our method outperforms state-of-the-art behavior cloning methods and effectively reduces collision rates.
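To make the described interface concrete, below is a minimal sketch (not the authors' released code) of a policy that takes a prompt embedding, the current observation, and a history summary as input and outputs a probability distribution over discrete navigation actions. All names (VisualNavPolicy, obs_encoder, history_collector) and architectural details such as the GRU history collector and the small transformer standing in for a fine-tuned language-model backbone are assumptions for illustration only.

```python
# Hypothetical sketch of the abstract's interface: prompt + current observation
# + history summary in, probability distribution over navigation actions out.
import torch
import torch.nn as nn

class VisualNavPolicy(nn.Module):
    def __init__(self, d_model=512, num_actions=4, history_dim=512):
        super().__init__()
        # Observation encoder: maps an RGB frame to one token-like embedding.
        self.obs_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        # History collector: summarizes embeddings of past observations (a GRU here).
        self.history_collector = nn.GRU(history_dim, d_model, batch_first=True)
        # Stand-in for a fine-tuned language-model backbone over the fused sequence.
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        # Action head: distribution over navigation actions
        # (e.g., MOVE_FORWARD, TURN_LEFT, TURN_RIGHT, STOP).
        self.action_head = nn.Linear(d_model, num_actions)

    def forward(self, prompt_emb, obs_rgb, past_obs_emb):
        # prompt_emb:   (B, T, d_model)     embedded text-prompt tokens
        # obs_rgb:      (B, 3, H, W)        current RGB observation
        # past_obs_emb: (B, K, history_dim) embeddings of previous observations
        obs_tok = self.obs_encoder(obs_rgb).unsqueeze(1)         # (B, 1, d_model)
        _, hist = self.history_collector(past_obs_emb)           # (1, B, d_model)
        hist_tok = hist.transpose(0, 1)                          # (B, 1, d_model)
        seq = torch.cat([prompt_emb, hist_tok, obs_tok], dim=1)  # fused input sequence
        feats = self.backbone(seq)
        logits = self.action_head(feats[:, -1])                  # read out at last token
        return torch.softmax(logits, dim=-1)                     # action distribution

# Behavior-cloning-style usage: match the output distribution to a demonstrated action.
if __name__ == "__main__":
    policy = VisualNavPolicy()
    probs = policy(torch.randn(2, 16, 512), torch.randn(2, 3, 224, 224),
                   torch.randn(2, 8, 512))
    loss = nn.functional.nll_loss(torch.log(probs + 1e-8), torch.tensor([0, 2]))
    print(probs.shape, loss.item())
```

In this reading, collision signals would enter as an additional loss term penalizing action distributions that lead to collisions, alongside the behavior-cloning objective on human demonstrations.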

Related articles:
arXiv:2503.08507 [cs.CV] (Published 2025-03-11, updated 2025-05-12)
Referring to Any Person
Qing Jiang et al.
arXiv:2306.13549 [cs.CV] (Published 2023-06-23)
A Survey on Multimodal Large Language Models
arXiv:2312.02483 [cs.CV] (Published 2023-12-05)
EtC: Temporal Boundary Expand then Clarify for Weakly Supervised Video Grounding with Multimodal Large Language Model