arXiv:2405.11215 [cs.CL]

MemeMQA: Multimodal Question Answering for Memes via Rationale-Based Inferencing

Siddhant Agarwal, Shivam Sharma, Preslav Nakov, Tanmoy Chakraborty

Published 2024-05-18 (Version 1)

Memes have evolved into a prevalent medium for diverse communication, ranging from humour to propaganda. With the rising popularity of image-focused content, there is a growing need to examine the potential harms of memes from multiple angles. Previous studies have analyzed memes in closed settings: detecting harm, applying semantic labels, and offering natural language explanations. To extend this line of research, we introduce MemeMQA, a multimodal question-answering framework that aims to elicit accurate responses to structured questions while providing coherent explanations. We curate MemeMQACorpus, a new dataset featuring 1,880 questions about 1,122 memes with corresponding answer-explanation pairs. We further propose ARSENAL, a novel two-stage multimodal framework that leverages the reasoning capabilities of LLMs to address MemeMQA. We benchmark MemeMQA with competitive baselines and demonstrate ARSENAL's superiority: roughly 18% higher answer-prediction accuracy and a distinct lead over the best baseline in text generation across various metrics measuring lexical and semantic alignment. We analyze ARSENAL's robustness through question-set diversification, confounder-based evaluation of MemeMQA's generalizability, and modality-specific assessment, enhancing our understanding of meme interpretation in the multimodal communication landscape.
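
To make the abstract's two-stage design concrete, here is a minimal, hypothetical Python sketch of a rationale-then-answer pipeline. It is not the paper's implementation: the Meme fields, the prompt wording, and the call_llm stub are all assumptions standing in for whatever image-understanding front-end and LLM backbone ARSENAL actually uses.

from dataclasses import dataclass

@dataclass
class Meme:
    image_caption: str  # assumption: the meme image is described as text upstream
    overlaid_text: str  # the text written on the meme itself

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (API or local model); hypothetical."""
    raise NotImplementedError("plug in an LLM of your choice here")

def generate_rationale(meme: Meme, question: str) -> str:
    # Stage 1: elicit a free-text rationale grounded in both modalities.
    prompt = (
        f"Meme image: {meme.image_caption}\n"
        f"Meme text: {meme.overlaid_text}\n"
        f"Question: {question}\n"
        "Explain step by step what the meme conveys before answering."
    )
    return call_llm(prompt)

def answer_with_rationale(meme: Meme, question: str) -> tuple[str, str]:
    # Stage 2: condition the final answer on the stage-1 rationale.
    rationale = generate_rationale(meme, question)
    prompt = (
        f"Rationale: {rationale}\n"
        f"Question: {question}\n"
        "Give a short, direct answer."
    )
    return call_llm(prompt), rationale

Conditioning the second-stage answer on the first-stage rationale is what ties the predicted answer to a coherent explanation, which is the pairing the MemeMQA task asks for.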

Related articles:
arXiv:2310.03017 [cs.CL] (Published 2023-10-04)
Multimodal Question Answering for Unified Information Extraction
arXiv:2407.09413 [cs.CL] (Published 2024-07-12)
SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers
arXiv:2410.21414 [cs.CL] (Published 2024-10-28)
CT2C-QA: Multimodal Question Answering over Chinese Text, Table and Chart