arXiv:2311.02538 [cs.CV]

Dense Video Captioning: A Survey of Techniques, Datasets and Evaluation Protocols

Iqra Qasim, Alexander Horsch, Dilip K. Prasad

Published 2023-11-05 (Version 1)

Untrimmed videos contain interrelated events, dependencies, context, overlapping events, object-object interactions, domain specificity, and other semantics worth highlighting when describing a video in natural language. Owing to this diversity, a single sentence can only correctly describe a portion of the video. Dense Video Captioning (DVC) aims to detect and describe the different events in a given video. The term DVC originated in the 2017 ActivityNet challenge, and considerable effort has since been devoted to the task. DVC is divided into three sub-tasks: (1) Video Feature Extraction (VFE), (2) Temporal Event Localization (TEL), and (3) Dense Caption Generation (DCG); a sketch of this pipeline follows below. This review discusses the studies that claim to perform DVC or its sub-tasks and summarizes their results. We also discuss the datasets that have been used for DVC. Lastly, we highlight some emerging challenges and future trends in the field.
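To make the three-stage decomposition concrete, here is a minimal Python sketch of how the sub-tasks compose into one pipeline. It is an illustration only: `encoder`, `localizer`, and `captioner` are hypothetical callables standing in for whatever models a given method uses, and `Event` is a placeholder type, not the API of any surveyed system.

```python
# Minimal sketch of the three DVC sub-tasks composed in sequence.
# All names here are hypothetical placeholders for illustration.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Event:
    start: float       # event start time (seconds)
    end: float         # event end time (seconds)
    caption: str = ""  # natural-language description of the event

def dense_video_caption(
    video_frames,
    encoder: Callable,    # (1) Video Feature Extraction (VFE)
    localizer: Callable,  # (2) Temporal Event Localization (TEL)
    captioner: Callable,  # (3) Dense Caption Generation (DCG)
) -> List[Event]:
    """Run the three DVC sub-tasks and return one caption per event."""
    # (1) VFE: map raw frames to visual features
    features = encoder(video_frames)
    # (2) TEL: propose (start, end) intervals for distinct events
    proposals: List[Tuple[float, float]] = localizer(features)
    # (3) DCG: generate one sentence per localized event
    events = []
    for start, end in proposals:
        sentence = captioner(features, start, end)
        events.append(Event(start, end, sentence))
    return events
```

The sequential design shown here reflects the common pipeline framing; many surveyed methods instead couple localization and captioning end to end, which this sketch does not capture.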

Related articles:
arXiv:2302.14115 [cs.CV] (Published 2023-02-27)
Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning
Antoine Yang et al.
arXiv:2207.11838 [cs.CV] (Published 2022-07-24, updated 2022-10-22)
SAVCHOI: Detecting Suspicious Activities using Dense Video Captioning with Human Object Interactions
arXiv:2506.20583 [cs.CV] (Published 2025-06-25)
Dense Video Captioning using Graph-based Sentence Summarization