arXiv Analytics

arXiv:1606.00061 [cs.CV]

Hierarchical Co-Attention for Visual Question Answering

Jiasen Lu, Jianwei Yang, Dhruv Batra, Devi Parikh

Published 2016-05-31, Version 1

A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling "where to look", or visual attention, it is equally important to model "what words to listen to", or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and, consequently, the image via the co-attention mechanism) in a hierarchical fashion using a novel 1-dimensional convolutional neural network (CNN). Our final model outperforms all reported methods, improving the state of the art on the VQA dataset from 60.4% to 62.1%, and from 61.6% to 65.4% on the COCO-QA dataset.
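The core idea — attending jointly over question words and image regions via a word-region affinity matrix — can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's exact formulation: it uses max-pooling over the affinity matrix to derive the two attention distributions, and all names (`W_b`, `q_hat`, `v_hat`) and toy sizes are assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
T, N, d = 5, 9, 16                 # toy sizes: question words, image regions, feature dim
Q = rng.standard_normal((T, d))    # question word features (stand-in for learned embeddings)
V = rng.standard_normal((N, d))    # image region features (stand-in for CNN features)
W_b = rng.standard_normal((d, d))  # learnable affinity weights (randomly initialized here)

# Affinity between every question word and every image region.
C = np.tanh(Q @ W_b @ V.T)         # shape (T, N)

# Simplified co-attention: each word scored by its best-matching region,
# each region scored by its best-matching word, then normalized.
a_q = softmax(C.max(axis=1))       # attention over question words, sums to 1
a_v = softmax(C.max(axis=0))       # attention over image regions, sums to 1

q_hat = a_q @ Q                    # attended question summary, shape (d,)
v_hat = a_v @ V                    # attended image summary, shape (d,)
```

In the paper this co-attention step is applied at each level of the question hierarchy (word, phrase, sentence), with the phrase level built by 1-D convolutions of window sizes 1, 2, and 3 over word embeddings followed by max-pooling; the sketch above shows a single level only.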

Related articles:
arXiv:1707.04968 [cs.CV] (Published 2017-07-17)
Visual Question Answering with Memory-Augmented Networks
arXiv:1907.12133 [cs.CV] (Published 2019-07-28)
An Empirical Study on Leveraging Scene Graphs for Visual Question Answering
arXiv:1511.02570 [cs.CV] (Published 2015-11-09)
Explicit Knowledge-based Reasoning for Visual Question Answering