arXiv:2001.08730 [cs.CV]

Robust Explanations for Visual Question Answering

Badri N. Patro, Shivansh Patel, Vinay P. Namboodiri

Published 2020-01-23 (Version 1)

In this paper, we propose a method to obtain robust explanations for visual question answering (VQA) that correlate well with the answers. Our model explains the answers obtained by a VQA model through visual and textual explanations. We address two main challenges: i) the answers and textual explanations obtained by current methods are not well correlated, and ii) current methods for visual explanation do not focus on the right location for explaining the answer. We address both challenges with a collaborative correlated module which ensures that, even without training against noise-based attacks, the enhanced correlation allows the right explanation and answer to be generated. We further show that this also improves the generated visual and textual explanations. The correlated module can be thought of as a robust way to verify whether the answer and explanations are coherent. We evaluate the model on the VQA-X dataset and observe that the proposed method yields better textual and visual justifications that support the decision. We showcase the robustness of the model against a noise-based perturbation attack using the corresponding visual and textual explanations, and provide a detailed empirical analysis. The source code for our model is available at \url{https://github.com/DelTA-Lab-IITK/CCM-WACV}.
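The abstract describes the collaborative correlated module only at a high level. The sketch below is a minimal, hypothetical PyTorch illustration of one way an answer-explanation coherence check of this kind could be implemented; the class name, feature dimensions, and cosine-based coherence score are assumptions for illustration, not the paper's exact architecture.

```python
# Hypothetical sketch of an answer-explanation correlation check, loosely
# inspired by the abstract's "collaborative correlated module". All names,
# dimensions, and the cosine-based score are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CorrelationModule(nn.Module):
    """Projects answer and explanation features into a shared space and
    scores how coherent they are (higher = better correlated)."""

    def __init__(self, answer_dim=512, expl_dim=512, joint_dim=256):
        super().__init__()
        self.answer_proj = nn.Linear(answer_dim, joint_dim)
        self.expl_proj = nn.Linear(expl_dim, joint_dim)

    def forward(self, answer_feat, expl_feat):
        a = F.normalize(self.answer_proj(answer_feat), dim=-1)
        e = F.normalize(self.expl_proj(expl_feat), dim=-1)
        # Cosine similarity as a coherence score in [-1, 1].
        return (a * e).sum(dim=-1)


def correlation_loss(score):
    # Encourage answer and explanation features to agree during training.
    return (1.0 - score).mean()


if __name__ == "__main__":
    module = CorrelationModule()
    ans = torch.randn(8, 512)   # answer features from a VQA model (placeholder)
    expl = torch.randn(8, 512)  # features of the generated explanation (placeholder)
    score = module(ans, expl)
    print(score.shape, correlation_loss(score).item())
```

In this reading, a low coherence score flags an answer whose explanation does not support it, which is one plausible way the verification role described in the abstract could be realized.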

Related articles:
arXiv:1909.01860 [cs.CV] (Published 2019-08-27)
Visual Question Answering using Deep Learning: A Survey and Performance Analysis
arXiv:1902.09487 [cs.CV] (Published 2019-02-25)
MUREL: Multimodal Relational Reasoning for Visual Question Answering
arXiv:1704.08243 [cs.CV] (Published 2017-04-26)
C-VQA: A Compositional Split of the Visual Question Answering (VQA) v1.0 Dataset