arXiv Analytics

arXiv:2001.05614 [cs.CV]

Delving Deeper into the Decoder for Video Captioning

Haoran Chen, Jianmin Li, Xiaolin Hu

Published 2020-01-16 (Version 1)

Video captioning is an advanced multi-modal task that aims to describe a video clip with a natural language sentence. The encoder-decoder framework has been the dominant paradigm for this task in recent years, yet the decoder of a video captioning model still suffers from several non-negligible problems. We make a thorough investigation into the decoder and adopt three techniques to improve the performance of the model. First, a combination of variational dropout and layer normalization is embedded into a recurrent unit to alleviate overfitting. Second, a new method is proposed for evaluating a model on a validation set, so as to select the best checkpoint for testing. Finally, a new training strategy called "professional learning" is proposed, which develops the strong points of a captioning model and bypasses its weaknesses. Experiments on the Microsoft Research Video Description Corpus (MSVD) and MSR-Video to Text (MSR-VTT) datasets demonstrate that our model achieves the best results under the BLEU, CIDEr, METEOR, and ROUGE-L metrics, with significant gains of up to 11.7% on MSVD and 5% on MSR-VTT over previous state-of-the-art models.
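The first technique combines variational (locked) dropout with layer normalization inside a recurrent unit. The key idea of variational dropout is to sample a single dropout mask per sequence and reuse it at every time step, rather than resampling per step. The sketch below illustrates that idea on a plain tanh RNN in NumPy; it is a minimal illustration of the general technique, not the authors' decoder (their recurrent unit, gating, and learned layer-norm parameters are assumptions simplified away here).

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize a vector to zero mean and unit variance
    # (learned gain/bias omitted for brevity).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def run_rnn(xs, W_x, W_h, p_drop=0.3, rng=None):
    """Tanh RNN over a sequence with layer normalization on the
    pre-activation and variational dropout on the recurrent state:
    one mask is sampled per sequence and reused at every step."""
    rng = rng or np.random.default_rng(0)
    hidden = W_h.shape[0]
    h = np.zeros(hidden)
    # Variational dropout: sample the mask ONCE for the whole sequence.
    mask = (rng.random(hidden) > p_drop) / (1.0 - p_drop)
    outputs = []
    for x in xs:
        pre = layer_norm(x @ W_x + (h * mask) @ W_h)
        h = np.tanh(pre)
        outputs.append(h)
    return np.stack(outputs)
```

Reusing the mask keeps the dropout noise consistent across time, which regularizes the recurrent weights without disrupting the temporal dynamics the way per-step masks can; layer normalization then stabilizes the pre-activation scale at each step.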

Comments: 8 pages, 3 figures, European Conference on Artificial Intelligence
Categories: cs.CV, cs.CL
Subjects: 68T45, 68T50, I.2.7
Related articles:
arXiv:2209.13853 [cs.CV] (Published 2022-09-28)
Thinking Hallucination for Video Captioning
arXiv:2212.11109 [cs.CV] (Published 2022-12-11)
MAViC: Multimodal Active Learning for Video Captioning
arXiv:1711.11135 [cs.CV] (Published 2017-11-29)
Video Captioning via Hierarchical Reinforcement Learning