arXiv Analytics

arXiv:1611.05594 [cs.CV]

SCA-CNN: Spatial and Channel-wise Attention in Convolutional Networks for Image Captioning

Long Chen, Hanwang Zhang, Jun Xiao, Liqiang Nie, Jian Shao, Tat-Seng Chua

Published 2016-11-17, Version 1

Visual attention has been successfully applied in structural prediction tasks such as visual captioning and question answering. Existing visual attention models are generally spatial; that is, the attention is modeled as spatial probabilities that re-weight the last conv-layer feature map of a CNN encoding the input image. However, we argue that such spatial attention does not necessarily conform to the attention mechanism, understood as a dynamic feature extractor that combines contextual fixations over time, because CNN features are naturally spatial, channel-wise, and multi-layer. In this paper, we introduce a novel convolutional neural network, dubbed SCA-CNN, that incorporates Spatial and Channel-wise Attentions in a CNN. In the task of image captioning, SCA-CNN dynamically modulates the sentence-generation context in multi-layer feature maps, encoding where (i.e., attentive spatial locations at multiple layers) and what (i.e., attentive channels) the visual attention is. We evaluate the SCA-CNN architecture on three benchmark image captioning datasets: Flickr8K, Flickr30K, and MSCOCO. SCA-CNN achieves significant improvements over state-of-the-art visual attention-based image captioning methods.
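The re-weighting the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the channel and spatial attention scores below are random placeholders, whereas in SCA-CNN they are produced by small learned networks conditioned on the caption decoder's hidden state, and the two attentions are applied at multiple conv layers.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a flat score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def sca_reweight(V, channel_scores, spatial_scores):
    """Apply channel-wise then spatial attention to a feature map V of
    shape (C, H, W). Scores are placeholder inputs standing in for the
    learned attention sub-networks in SCA-CNN."""
    C, H, W = V.shape
    beta = softmax(channel_scores)                 # (C,): "what" to attend to
    Vc = V * beta[:, None, None]                   # re-weight each channel
    alpha = softmax(spatial_scores).reshape(H, W)  # (H, W): "where" to attend
    return Vc * alpha[None, :, :]                  # re-weight each location

rng = np.random.default_rng(0)
V = rng.standard_normal((512, 7, 7))               # e.g. a conv5 feature map
out = sca_reweight(V, rng.standard_normal(512), rng.standard_normal(49))
print(out.shape)  # (512, 7, 7): same shape, attention-modulated
```

Because both weight vectors are softmax-normalized, each acts as a probability distribution over channels and spatial locations respectively, so the output keeps the input's shape while emphasizing the attended channels and positions.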

Related articles:
arXiv:1510.05970 [cs.CV] (Published 2015-10-20)
Stereo Matching by Training a Convolutional Neural Network to Compare Image Patches
arXiv:1510.07391 [cs.CV] (Published 2015-10-26)
Vehicle Color Recognition using Convolutional Neural Network
arXiv:1505.02146 [cs.CV] (Published 2015-05-08)
DeepBox: Learning Objectness with Convolutional Networks