arXiv:1704.08243 [cs.CV]

C-VQA: A Compositional Split of the Visual Question Answering (VQA) v1.0 Dataset

Aishwarya Agrawal, Aniruddha Kembhavi, Dhruv Batra, Devi Parikh

Published 2017-04-26 (Version 1)

Visual Question Answering (VQA) has received considerable attention over the past couple of years, and a number of deep learning models have been proposed for the task. However, it has been shown that these models are heavily driven by superficial correlations in the training data and lack compositionality -- the ability to answer questions about unseen compositions of seen concepts. This compositionality is desirable and central to intelligence. In this paper, we propose a new setting for Visual Question Answering where the test question-answer pairs are compositionally novel compared to the training question-answer pairs. To facilitate developing models under this setting, we present a new compositional split of the VQA v1.0 dataset, which we call Compositional VQA (C-VQA). We analyze the distribution of questions and answers in the C-VQA splits. Finally, we evaluate several existing VQA models under this new setting and show that their performance degrades significantly compared to the original VQA setting.
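To make the splitting idea concrete, below is a minimal Python sketch of one way to build a compositional train/test split. It is not the paper's actual C-VQA procedure: the `composition_key` heuristic (pairing the question's opening words with its answer) and all function names are assumptions made for illustration. The only property it demonstrates is the one the abstract describes: every concept composition seen at test time is unseen at training time.

```python
import random
from collections import defaultdict

def composition_key(qa, head_len=4):
    # Hypothetical proxy for a "concept composition": pair the
    # question's opening words with its answer, e.g.
    # ("what color is the", "green").
    head = " ".join(qa["question"].lower().split()[:head_len])
    return (head, qa["answer"].lower())

def compositional_split(qa_pairs, test_frac=0.3, seed=0):
    # Group QA pairs by composition so each composition lands
    # entirely in train or entirely in test; compositions assigned
    # to test are therefore never observed during training.
    groups = defaultdict(list)
    for qa in qa_pairs:
        groups[composition_key(qa)].append(qa)

    keys = list(groups)
    random.Random(seed).shuffle(keys)
    n_test = int(len(keys) * test_frac)
    test_keys = keys[:n_test]

    train = [qa for k in keys[n_test:] for qa in groups[k]]
    test = [qa for k in test_keys for qa in groups[k]]
    return train, test

if __name__ == "__main__":
    data = [
        {"question": "What color is the plate?", "answer": "green"},
        {"question": "What color is the plate?", "answer": "white"},
        {"question": "What is the dog doing?", "answer": "running"},
        {"question": "What is the cat doing?", "answer": "sleeping"},
    ]
    train, test = compositional_split(data, test_frac=0.5)
    # By construction, no test composition appears in training.
    assert not ({composition_key(q) for q in train}
                & {composition_key(q) for q in test})
    print(len(train), "train /", len(test), "test")
```

Note that a split like this keeps individual concepts (colors, objects, activities) and answers available in training while holding out particular pairings of them, which is what makes the test set compositionally novel rather than merely out-of-vocabulary.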

Related articles:
arXiv:2001.08730 [cs.CV] (Published 2020-01-23)
Robust Explanations for Visual Question Answering
arXiv:1902.09487 [cs.CV] (Published 2019-02-25)
MUREL: Multimodal Relational Reasoning for Visual Question Answering
arXiv:1906.10169 [cs.CV] (Published 2019-06-24)
RUBi: Reducing Unimodal Biases in Visual Question Answering