arXiv:2405.14974 [cs.CV]

LOVA3: Learning to Visual Question Answering, Asking and Assessment

Henry Hengyuan Zhao, Pan Zhou, Difei Gao, Mike Zheng Shou

Published 2024-05-23Version 1

Question answering, asking, and assessment are three innate human traits crucial for understanding the world and acquiring knowledge. By enhancing these capabilities, humans can use data more effectively, leading to better comprehension and learning outcomes. However, current Multimodal Large Language Models (MLLMs) primarily focus on question answering, often neglecting the full potential of questioning and assessment skills. In this study, we introduce LOVA3, an innovative framework named "Learning tO Visual question Answering, Asking and Assessment," designed to equip MLLMs with these additional capabilities. Our approach creates two supplementary training tasks, GenQA and EvalQA, aimed at fostering the skills of asking and assessing questions in the context of images. To develop the questioning ability, we compile a comprehensive set of multimodal foundational tasks. For assessment, we introduce a new benchmark called EvalQABench, comprising 64,000 training samples (split evenly between positive and negative samples) and 5,000 testing samples. We posit that equipping MLLMs with the capabilities to answer, ask, and assess questions will improve their multimodal comprehension and lead to better performance. We validate this hypothesis by training an MLLM with the LOVA3 framework and evaluating it on 10 multimodal benchmarks. The results show consistent performance improvements, confirming the efficacy of our approach.

Comments: The code is available at https://github.com/showlab/LOVA3
Categories: cs.CV, cs.AI, cs.CL
Related articles:
arXiv:1902.09487 [cs.CV] (Published 2019-02-25)
MUREL: Multimodal Relational Reasoning for Visual Question Answering
arXiv:1704.08243 [cs.CV] (Published 2017-04-26)
C-VQA: A Compositional Split of the Visual Question Answering (VQA) v1.0 Dataset
arXiv:1610.01465 [cs.CV] (Published 2016-10-05)
Visual Question Answering: Datasets, Algorithms, and Future Challenges