arXiv Analytics

arXiv:1609.05600 [cs.CV]

Graph-Structured Representations for Visual Question Answering

Damien Teney, Lingqiao Liu, Anton van den Hengel

Published 2016-09-19, Version 1

This paper proposes to improve visual question answering (VQA) with structured representations of both scene contents and questions. A key challenge in VQA is the requirement for joint reasoning over the visual and text domains. The predominant CNN/LSTM-based approach to VQA is limited by monolithic vector representations that largely ignore structure in the scene and in the form of the question. CNN feature vectors cannot effectively capture situations as simple as multiple object instances, and LSTMs process questions as a series of words, which does not reflect the true complexity of language structure. We instead propose to build graphs over the scene objects and over the question words, and we describe a deep neural network that exploits the structure in these representations. This approach shows significant benefit over the sequential processing of LSTMs. The overall efficacy of our approach is demonstrated by significant improvements over the state of the art, from 71.2% to 74.4% accuracy on the "abstract scenes" multiple-choice benchmark, and from 34.7% to 39.1% accuracy over pairs of "balanced" scenes, i.e. images with fine-grained differences and opposite yes/no answers to the same question.
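To make the core idea concrete, here is a minimal PyTorch-style sketch of a model in the spirit the abstract describes: scene objects and question words become graph nodes, every object-word pair is combined, and attention over the pairs pools a joint representation for answer scoring. All module names, dimensions, and the fusion scheme below are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch: graph-structured VQA via attended object-word pairs.
# Shapes, layer sizes, and the pairwise fusion are assumptions for illustration.
import torch
import torch.nn as nn

class GraphVQASketch(nn.Module):
    def __init__(self, obj_dim=300, word_dim=300, hidden=512, num_answers=18):
        super().__init__()
        self.obj_proj = nn.Linear(obj_dim, hidden)    # scene-graph node features
        self.word_proj = nn.Linear(word_dim, hidden)  # question-graph node features
        self.att = nn.Linear(hidden, 1)               # scores each object-word pair
        self.classifier = nn.Linear(hidden, num_answers)

    def forward(self, objects, words):
        # objects: (B, N_obj, obj_dim); words: (B, N_word, word_dim)
        o = torch.relu(self.obj_proj(objects))        # (B, N_obj, H)
        w = torch.relu(self.word_proj(words))         # (B, N_word, H)
        # Combine every scene node with every question node (pairwise fusion).
        pair = o.unsqueeze(2) * w.unsqueeze(1)        # (B, N_obj, N_word, H)
        # Attention weights over all object-word pairs.
        a = torch.softmax(self.att(pair).flatten(1), dim=1)   # (B, N_obj*N_word)
        # Attention-weighted pooling of pair features into one joint vector.
        pooled = (a.unsqueeze(-1) * pair.flatten(1, 2)).sum(1)  # (B, H)
        return self.classifier(pooled)                # answer scores

# Usage with random stand-in features for 10 objects and 8 question words:
model = GraphVQASketch()
scores = model(torch.randn(2, 10, 300), torch.randn(2, 8, 300))  # (2, 18)
```

The pairwise fusion is what distinguishes this family of models from monolithic CNN/LSTM baselines: rather than compressing the scene and question each into a single vector before they interact, every object node can be matched against every word node before pooling.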

Related articles:
arXiv:1611.08998 [cs.CV] (Published 2016-11-28)
DeepSetNet: Predicting Sets with Deep Neural Networks
arXiv:1707.07312 [cs.CV] (Published 2017-07-23)
A new take on measuring nutritional density: The feasibility of using a deep neural network to assess commercially-prepared puree concentrations
arXiv:1811.02644 [cs.CV] (Published 2018-10-25)
DeepDPM: Dynamic Population Mapping via Deep Neural Network