arXiv:2404.12966 [cs.CV]

Eyes Can Deceive: Benchmarking Counterfactual Reasoning Abilities of Multi-modal Large Language Models

Yian Li, Wentao Tian, Yang Jiao, Jingjing Chen, Yu-Gang Jiang

Published 2024-04-19Version 1

Counterfactual reasoning, a crucial manifestation of human intelligence, refers to making presuppositions that depart from established facts and extrapolating their potential outcomes. Existing multimodal large language models (MLLMs) have exhibited impressive cognitive and reasoning capabilities, which have been examined across a wide range of Visual Question Answering (VQA) benchmarks. Nevertheless, how do existing MLLMs perform when faced with counterfactual questions? To answer this question, we first curate a novel CounterFactual MultiModal reasoning benchmark, abbreviated as CFMM, to systematically assess the counterfactual reasoning capabilities of MLLMs. CFMM comprises six challenging tasks, each containing hundreds of carefully human-labeled counterfactual questions, to evaluate MLLMs' counterfactual reasoning across diverse aspects. Interestingly, our experiments reveal that existing MLLMs prefer to believe what they see and ignore the counterfactual presuppositions posed in the question, leading to inaccurate responses. Furthermore, we evaluate a wide range of prevalent MLLMs on the proposed CFMM. The significant gap between their performance on CFMM and on several standard VQA benchmarks indicates that there is still considerable room for improvement before existing MLLMs approach human-level intelligence. Conversely, boosting MLLMs' performance on CFMM offers a potential avenue toward developing MLLMs with more advanced intelligence.
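The failure mode the abstract describes can be made concrete with a minimal evaluation sketch (this is not the paper's code; `ask_model`, the field names, and the exact-match metric are all illustrative assumptions). A model that "believes what it sees" scores well on a factual question about an image but ignores the counterfactual presupposition in its paired question:

```python
# Hedged sketch: score a model on paired factual / counterfactual VQA
# questions and expose the gap between the two accuracies.
# `ask_model` is a hypothetical stand-in for any MLLM inference call.

def exact_match(prediction: str, answer: str) -> bool:
    """Case-insensitive exact-match scoring, a common VQA convention."""
    return prediction.strip().lower() == answer.strip().lower()

def evaluate(samples, ask_model):
    """Each sample pairs a factual and a counterfactual question about
    the same image; returns accuracy on each subset."""
    factual_hits = counterfactual_hits = 0
    for s in samples:
        if exact_match(ask_model(s["image"], s["factual_q"]), s["factual_a"]):
            factual_hits += 1
        if exact_match(ask_model(s["image"], s["counterfactual_q"]),
                       s["counterfactual_a"]):
            counterfactual_hits += 1
    n = len(samples)
    return factual_hits / n, counterfactual_hits / n

# Illustrative sample: the counterfactual presupposition overrides the image.
samples = [{
    "image": "img_001",
    "factual_q": "What color is the apple?", "factual_a": "red",
    "counterfactual_q": "If the apple were painted blue, what color would it be?",
    "counterfactual_a": "blue",
}]

# A literal model that only reports what is visible in the image:
literal_model = lambda image, q: "red"
fact_acc, cf_acc = evaluate(samples, literal_model)
# fact_acc is 1.0 while cf_acc is 0.0: high factual accuracy coexists
# with total failure on counterfactuals, which is the gap CFMM measures.
```

The benchmark's finding corresponds to this gap at scale: strong scores on standard VQA benchmarks alongside much weaker scores on the six CFMM tasks.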

Related articles: Most relevant | Search more
arXiv:2411.00304 [cs.CV] (Published 2024-11-01)
Unified Generative and Discriminative Training for Multi-modal Large Language Models
Wei Chow et al.
arXiv:2408.12867 [cs.CV] (Published 2024-08-23)
Semantic Alignment for Multimodal Large Language Models
Tao Wu et al.
arXiv:2409.10197 [cs.CV] (Published 2024-09-16, updated 2024-12-25)
Fit and Prune: Fast and Training-free Visual Token Pruning for Multi-modal Large Language Models