arXiv:1909.03169 [cs.CV]

Look and Modify: Modification Networks for Image Captioning

Fawaz Sammani, Mahmoud Elsayed

Published 2019-09-07 (Version 1)

Attention-based neural encoder-decoder frameworks are widely used for image captioning. Most of these frameworks generate the caption entirely from scratch, relying solely on image features or object-detection region features. In this paper, we introduce a novel framework that learns to modify an existing caption produced by a given captioning model by modeling the residual information: at each timestep, the model learns what to keep, remove, or add to the existing caption, allowing it to focus fully on "what to modify" rather than on "what to predict". We evaluate our method on the COCO dataset, training it on top of several image captioning frameworks, and show that our model successfully modifies captions, yielding better ones with better evaluation scores.
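The core idea of modifying rather than predicting from scratch can be illustrated with a minimal sketch of one decoding step. This is an assumption-laden toy, not the authors' architecture: a learned sigmoid gate decides, per dimension, how much of the existing caption's word embedding to keep versus how much new information from the decoder state to add. All names (`modify_step`, `W_g`, `W_v`, `vocab_proj`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def modify_step(h_t, e_prev, W_g, W_v, vocab_proj):
    """One hypothetical caption-modification step (illustrative only).

    h_t        : decoder hidden state at timestep t
    e_prev     : embedding of the word from the existing caption at t
    W_g, W_v   : gate and value projection matrices (assumed shapes)
    vocab_proj : projection from fused representation to vocabulary logits
    """
    # Per-dimension "keep" gate: near 1 keeps the existing word's
    # information, near 0 replaces it with new residual information.
    g = sigmoid(W_g @ np.concatenate([h_t, e_prev]))
    # Residual fusion of the existing caption word and the decoder state.
    fused = g * e_prev + (1.0 - g) * np.tanh(W_v @ h_t)
    # Score the fused representation over the vocabulary.
    return vocab_proj @ fused

d, v = 8, 20                       # embedding dim, toy vocabulary size
h_t = rng.standard_normal(d)
e_prev = rng.standard_normal(d)
W_g = rng.standard_normal((d, 2 * d))
W_v = rng.standard_normal((d, d))
vocab_proj = rng.standard_normal((v, d))

logits = modify_step(h_t, e_prev, W_g, W_v, vocab_proj)
print(logits.shape)  # (20,)
```

Sampling or taking an argmax over `logits` at each timestep would then either retain the existing word (if the gate leans toward keeping) or emit a replacement, which is the intuition behind "what to modify" rather than "what to predict".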
