arXiv:2407.09413 [cs.CL]

SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers

Shraman Pramanick, Rama Chellappa, Subhashini Venugopalan

Published 2024-07-12 (Version 1)

Seeking answers to questions within long scientific research articles is a crucial task that helps readers quickly resolve their inquiries. However, existing question-answering (QA) datasets based on scientific papers are limited in scale and focus solely on textual content. To address this limitation, we introduce SPIQA (Scientific Paper Image Question Answering), the first large-scale QA dataset specifically designed to interpret complex figures and tables within the context of scientific research articles across various domains of computer science. Leveraging the breadth of expertise and the ability of multimodal large language models (MLLMs) to understand figures, we employ both automatic and manual curation to create the dataset. We craft an information-seeking task involving multiple images that cover a wide variety of plots, charts, tables, schematic diagrams, and result visualizations. SPIQA comprises 270K questions divided into training, validation, and three different evaluation splits. Through extensive experiments with 12 prominent foundational models, we evaluate the ability of current multimodal systems to comprehend the nuanced aspects of research articles. Additionally, we propose a Chain-of-Thought (CoT) evaluation strategy with in-context retrieval that allows fine-grained, step-by-step assessment and improves model performance. We further explore the upper bounds of performance enhancement with additional textual information, highlighting its promising potential for future research and the dataset's impact on how we interact with scientific literature.
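The in-context retrieval step in the CoT evaluation strategy, selecting the figures or tables most relevant to a question before step-by-step reasoning, can be sketched with a toy example. The keyword-overlap scoring below is a deliberate simplification standing in for the model-based retrieval the abstract describes; the function name and caption format are illustrative assumptions, not the paper's actual pipeline:

```python
import re

def retrieve_figures(question, captions, k=2):
    """Toy in-context retrieval: rank figure/table captions by word
    overlap with the question and return the top-k figure IDs.

    captions: dict mapping figure ID -> caption text.
    """
    q_words = set(re.findall(r"\w+", question.lower()))
    scored = sorted(
        captions.items(),
        key=lambda kv: -len(q_words & set(re.findall(r"\w+", kv[1].lower()))),
    )
    return [fig_id for fig_id, _ in scored[:k]]

# Example: the question about model accuracy matches the plot caption best.
captions = {
    "figure_1": "Accuracy of all models on the three evaluation splits",
    "table_2": "Statistics of the training and validation sets",
}
top = retrieve_figures("What accuracy do the models reach?", captions, k=1)
```

In the actual evaluation, a real system would pass the retrieved images (rather than captions alone) to the MLLM along with the question, then score the chain-of-thought answer step by step.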

Related articles:
arXiv:2410.21414 [cs.CL] (Published 2024-10-28)
CT2C-QA: Multimodal Question Answering over Chinese Text, Table and Chart
arXiv:2310.03017 [cs.CL] (Published 2023-10-04)
Multimodal Question Answering for Unified Information Extraction
arXiv:2405.11215 [cs.CL] (Published 2024-05-18)
MemeMQA: Multimodal Question Answering for Memes via Rationale-Based Inferencing