arXiv Analytics

arXiv:1706.02430 [cs.CV]

Image Captioning with Object Detection and Localization

Zhongliang Yang, Yu-Jin Zhang, Sadaqat ur Rehman, Yongfeng Huang

Published 2017-06-08, Version 1

Automatically generating a natural language description of an image is a task close to the heart of image understanding. In this paper, we present a multi-model neural network method, closely related to the human visual system, that automatically learns to describe the content of images. Our model consists of two sub-models: an object detection and localization model, which extracts information about the objects in an image and their spatial relationships; and a deep recurrent neural network (RNN) based on long short-term memory (LSTM) units with an attention mechanism for sentence generation. As each word of the description is generated, it is automatically aligned to different objects in the input image, similar to the attention mechanism of the human visual system. Experimental results on the COCO dataset showcase the merit of the proposed method, which outperforms previous benchmark models.
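The word-to-object alignment the abstract describes can be sketched as soft attention over detected object features: at each decoding step, the LSTM hidden state scores every object region, and the softmax of those scores gives the alignment weights. The sketch below is a minimal illustration with additive (Bahdanau-style) attention; all dimensions, weight matrices, and names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical sizes (not from the paper): 4 detected object regions,
# each encoded as an 8-d feature vector; an 8-d LSTM hidden state.
rng = np.random.default_rng(0)
num_objects, feat_dim = 4, 8
V = rng.normal(size=(num_objects, feat_dim))  # object-region features
h = rng.normal(size=feat_dim)                 # current decoder state

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

# Additive attention: score each object region against the decoder
# state, then form a context vector as a weighted sum of regions.
W_v = rng.normal(size=(feat_dim, feat_dim))   # projects region features
W_h = rng.normal(size=(feat_dim, feat_dim))   # projects decoder state
w = rng.normal(size=feat_dim)                 # scoring vector

scores = np.tanh(V @ W_v + h @ W_h) @ w  # one score per object region
alpha = softmax(scores)                  # alignment weights, sum to 1
context = alpha @ V                      # attended object features

print(alpha, context.shape)
```

The weights `alpha` are the per-word alignment: the region with the largest weight is the object the current word "attends to", and `context` feeds into the LSTM's next-word prediction.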

Related articles:
arXiv:1611.02879 [cs.CV] (Published 2016-11-09)
Audio Visual Speech Recognition using Deep Recurrent Neural Networks
arXiv:1904.00767 [cs.CV] (Published 2019-03-18)
Boosted Attention: Leveraging Human Attention for Image Captioning
arXiv:1903.12020 [cs.CV] (Published 2019-03-28)
Describing like humans: on diversity in image captioning