arXiv:1904.00767 [cs.CV]

Boosted Attention: Leveraging Human Attention for Image Captioning

Shi Chen, Qi Zhao

Published 2019-03-18 (Version 1)

Visual attention has proven useful in image captioning, where its goal is to enable a captioning model to selectively focus on regions of interest. Existing models typically rely on top-down language information and learn attention implicitly by optimizing the captioning objective. While somewhat effective, without direct supervision the learned top-down attention can fail to focus on the correct regions of interest. Inspired by the human visual system, which is driven not only by task-specific top-down signals but also by visual stimuli, in this work we propose to use both types of attention for image captioning. In particular, we highlight the complementary nature of the two types of attention and develop a model (Boosted Attention) that integrates them for image captioning. We validate the proposed approach, achieving state-of-the-art performance across various evaluation metrics.
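The integration the abstract describes can be made concrete with a short sketch. The following is a minimal illustration (not the authors' released code), assuming PyTorch, a CNN feature grid, a language-conditioned top-down attention map, and a precomputed stimulus-driven saliency map; the element-wise modulation followed by renormalization is one plausible way to combine the two attention signals, not necessarily the paper's exact formulation. The function and argument names (boosted_attention, top_down_logits, saliency) are hypothetical.

    import torch
    import torch.nn.functional as F

    def boosted_attention(features, top_down_logits, saliency):
        # features:        (B, C, H, W) CNN feature map
        # top_down_logits: (B, H, W)    language-conditioned attention logits
        # saliency:        (B, H, W)    stimulus-driven saliency map in [0, 1]
        # Returns an attended context vector of shape (B, C).
        B, C, H, W = features.shape
        # Top-down attention from the captioning decoder: softmax over locations.
        alpha = F.softmax(top_down_logits.view(B, -1), dim=-1)
        # "Boost" the top-down weights with bottom-up saliency, then renormalize
        # so the combined weights still sum to one per image.
        boosted = alpha * saliency.view(B, -1)
        boosted = boosted / (boosted.sum(dim=-1, keepdim=True) + 1e-8)
        # Weighted sum of spatial features -> context vector for the decoder.
        context = torch.bmm(boosted.unsqueeze(1),
                            features.view(B, C, -1).transpose(1, 2))
        return context.squeeze(1)

    # Example usage with made-up shapes:
    # feats  = torch.randn(2, 512, 14, 14)
    # logits = torch.randn(2, 14, 14)
    # sal    = torch.rand(2, 14, 14)
    # ctx    = boosted_attention(feats, logits, sal)   # -> (2, 512)

The renormalization step reflects the complementary-signals view: saliency reweights, rather than replaces, the language-driven attention, so locations the decoder already attends to are suppressed or amplified according to the visual stimulus.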

Related articles:
arXiv:1912.08226 [cs.CV] (Published 2019-12-17)
M$^2$: Meshed-Memory Transformer for Image Captioning
arXiv:2210.10914 [cs.CV] (Published 2022-10-19)
Prophet Attention: Predicting Attention with Future Attention for Improved Image Captioning
arXiv:1706.08474 [cs.CV] (Published 2017-06-26)
Paying More Attention to Saliency: Image Captioning with Saliency and Context Attention