arXiv:2402.14683 [cs.CV]

Visual Hallucinations of Multi-modal Large Language Models

Wen Huang, Hongbin Liu, Minxin Guo, Neil Zhenqiang Gong

Published 2024-02-22, updated 2024-06-16 (Version 2)

Visual hallucination (VH) means that a multi-modal LLM (MLLM) imagines incorrect details about an image in visual question answering. Existing studies find VH instances only in existing image datasets, which results in a biased understanding of MLLMs' performance under VH due to the limited diversity of such VH instances. In this work, we propose a tool called VHTest to generate a diverse set of VH instances. Specifically, VHTest finds some initial VH instances in existing image datasets (e.g., COCO), generates a text description for each VH mode, and uses a text-to-image generative model (e.g., DALL-E-3) to generate VH images based on the text descriptions. We collect a benchmark dataset with 1,200 VH instances in 8 VH modes using VHTest. We find that existing MLLMs such as GPT-4V, LLaVA-1.5, and MiniGPT-v2 hallucinate on a large fraction of the instances in our benchmark. Moreover, we find that fine-tuning an MLLM using our benchmark dataset reduces its likelihood of hallucinating without sacrificing its performance on other benchmarks. Our benchmark is publicly available at https://github.com/wenhuang2000/VHTest.
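The abstract describes a three-step pipeline: mine initial VH instances from an existing dataset, turn each VH mode into a text description, and use a text-to-image model to synthesize new VH images. The Python sketch below only mirrors that description and is not the authors' code: the function names, the mode names, and the even 150-per-mode split are illustrative assumptions chosen to match the 1,200-instance, 8-mode figure; the actual implementation is in the linked repository.

```python
from dataclasses import dataclass

# The paper reports 8 VH modes; these names are placeholders, not the paper's taxonomy.
VH_MODES = ["existence", "shape", "color", "counting",
            "orientation", "size", "position", "OCR"]

@dataclass
class VHInstance:
    mode: str       # which VH mode this instance probes
    image: str      # path to the mined or generated image
    question: str   # visual question posed to the MLLM
    answer: str     # reference (ground-truth) answer

def find_initial_instances(dataset: str, mode: str) -> list:
    """Step 1: mine an existing dataset (e.g., COCO) for images on which
    a reference MLLM already hallucinates in the given mode."""
    return []  # placeholder stub

def describe_mode(initial_instances: list, mode: str) -> str:
    """Step 2: summarize the shared property of the initial instances as
    a text description of the VH mode."""
    return f"an image likely to trigger {mode} hallucinations"  # placeholder stub

def generate_vh_images(description: str, n: int) -> list:
    """Step 3: synthesize new images from the description with a
    text-to-image model such as DALL-E-3 (real API call omitted;
    this stub returns placeholder file names)."""
    return [f"vh_{i}.png" for i in range(n)]

def build_benchmark(dataset: str = "COCO", per_mode: int = 150) -> list:
    """Assemble VH instances across all modes; 8 modes at 150 per mode
    gives the 1,200 instances mentioned in the abstract, assuming an even split."""
    benchmark = []
    for mode in VH_MODES:
        seeds = find_initial_instances(dataset, mode)
        description = describe_mode(seeds, mode)
        for image in generate_vh_images(description, per_mode):
            benchmark.append(VHInstance(mode, image,
                                        question="<question for this image>",
                                        answer="<reference answer>"))
    return benchmark
```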

Comments: To appear in ACL Findings, 2024
Categories: cs.CV, cs.AI, cs.LG
Related articles:
arXiv:2501.15140 [cs.CV] (Published 2025-01-25)
Analyzing and Boosting the Power of Fine-Grained Visual Recognition for Multi-modal Large Language Models
arXiv:2402.05935 [cs.CV] (Published 2024-02-08)
SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models
Peng Gao et al.
arXiv:2404.12966 [cs.CV] (Published 2024-04-19)
Eyes Can Deceive: Benchmarking Counterfactual Reasoning Abilities of Multi-modal Large Language Models