{ "id": "2001.05614", "version": "v1", "published": "2020-01-16T02:18:27.000Z", "updated": "2020-01-16T02:18:27.000Z", "title": "Delving Deeper into the Decoder for Video Captioning", "authors": [ "Haoran Chen", "Jianmin Li", "Xiaolin Hu" ], "comment": "8 pages, 3 figures, European Conference on Artificial Intelligence", "categories": [ "cs.CV", "cs.CL" ], "abstract": "Video captioning is an advanced multi-modal task which aims to describe a video clip using a natural language sentence. The encoder-decoder framework is the most popular paradigm for this task in recent years. However, there still exist some non-negligible problems in the decoder of a video captioning model. We make a thorough investigation into the decoder and adopt three techniques to improve the performance of the model. First of all, a combination of variational dropout and layer normalization is embedded into a recurrent unit to alleviate the problem of overfitting. Secondly, a new method is proposed to evaluate the performance of a model on a validation set so as to select the best checkpoint for testing. Finally, a new training strategy called \\textit{professional learning} is proposed which develops the strong points of a captioning model and bypasses its weaknesses. It is demonstrated in the experiments on Microsoft Research Video Description Corpus (MSVD) and MSR-Video to Text (MSR-VTT) datasets that our model has achieved the best results evaluated by BLEU, CIDEr, METEOR and ROUGE-L metrics with significant gains of up to 11.7% on MSVD and 5% on MSR-VTT compared with the previous state-of-the-art models.", "revisions": [ { "version": "v1", "updated": "2020-01-16T02:18:27.000Z" } ], "analyses": { "subjects": [ "68T45", "68T50", "I.2.7" ], "keywords": [ "video captioning", "delving deeper", "microsoft research video description corpus", "captioning model", "natural language sentence" ], "tags": [ "conference paper" ], "note": { "typesetting": "TeX", "pages": 8, "language": "en", "license": "arXiv", "status": "editable" } } }