arXiv:2503.08507 [cs.CV]

Referring to Any Person

Qing Jiang, Lin Wu, Zhaoyang Zeng, Tianhe Ren, Yuda Xiong, Yihao Chen, Qin Liu, Lei Zhang

Published 2025-03-11, updated 2025-05-12 (version 2)

Humans are undoubtedly the most important participants in computer vision, and the ability to detect any individual given a natural language description, a task we define as referring to any person, holds substantial practical value. However, we find that existing models generally fail to achieve real-world usability, and current benchmarks are limited by their focus on one-to-one referring, which hinders progress in this area. In this work, we revisit this task from three critical perspectives: task definition, dataset design, and model architecture. We first identify five aspects of referable entities and three distinctive characteristics of this task. Next, we introduce HumanRef, a novel dataset designed to tackle these challenges and better reflect real-world applications. From a model design perspective, we integrate a multimodal large language model with an object detection framework, constructing a robust referring model named RexSeek. Experimental results reveal that state-of-the-art models, which perform well on commonly used benchmarks like RefCOCO/+/g, struggle with HumanRef due to their inability to detect multiple individuals. In contrast, RexSeek not only excels in human referring but also generalizes effectively to common object referring, making it broadly applicable across various perception tasks. Code is available at https://github.com/IDEA-Research/RexSeek
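To make the task's input/output contract concrete, below is a minimal sketch of what a one-to-many referring interface could look like: given an image and a natural-language description, the model returns boxes for every matching person rather than a single box. The class, method, checkpoint path, and file names are hypothetical placeholders for illustration only; they are not the actual RexSeek API, which is documented in the linked repository.

```python
# Hypothetical sketch of the "referring to any person" task interface.
# Names below (ReferringPersonDetector, refer, checkpoint path) are placeholders,
# NOT the RexSeek API from https://github.com/IDEA-Research/RexSeek.
from dataclasses import dataclass
from typing import List
from PIL import Image


@dataclass
class PersonBox:
    x0: float  # left
    y0: float  # top
    x1: float  # right
    y1: float  # bottom
    score: float  # confidence


class ReferringPersonDetector:
    """Placeholder wrapper around a multimodal LLM + object detector."""

    def __init__(self, checkpoint: str):
        self.checkpoint = checkpoint  # path to pretrained weights (placeholder)

    def refer(self, image: Image.Image, description: str) -> List[PersonBox]:
        # A real system would ground the description with the language model and
        # score detector proposals; this stub only illustrates the I/O contract:
        # one description may match zero, one, or many people in the image.
        return []


if __name__ == "__main__":
    detector = ReferringPersonDetector("rexseek-like-checkpoint.pth")  # placeholder
    image = Image.open("street_scene.jpg")  # any test image
    boxes = detector.refer(image, "the two people wearing red jackets on the left")
    for box in boxes:
        print(box)
```

The key design point the abstract emphasizes is the one-to-many output: a benchmark or model that assumes exactly one referred person per query cannot handle descriptions that legitimately match several individuals.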

Related articles:
arXiv:2312.06968 [cs.CV] (Published 2023-12-12)
Hallucination Augmented Contrastive Learning for Multimodal Large Language Model
Chaoya Jiang et al.
arXiv:2505.10769 [cs.CV] (Published 2025-05-16)
Unifying Segment Anything in Microscopy with Multimodal Large Language Model
arXiv:2501.13707 [cs.CV] (Published 2025-01-23)
EventVL: Understand Event Streams via Multimodal Large Language Model