
arXiv:2501.18269 [cs.CV]

MAMS: Model-Agnostic Module Selection Framework for Video Captioning

Sangho Lee, Il Yong Chun, Hogun Park

Published 2025-01-30 (Version 1)

Multi-modal transformers are rapidly gaining attention in video captioning tasks. Existing multi-modal video captioning methods typically extract a fixed number of frames, which poses critical challenges. When too few frames are extracted, important frames carrying essential information for caption generation may be missed. Conversely, extracting too many frames includes nearly consecutive frames, causing redundancy among the visual tokens extracted from them. To extract an appropriate number of frames for each video, this paper proposes the first model-agnostic module selection framework for video captioning, which has two main functions: (1) selecting a caption generation module of an appropriate size based on the visual tokens extracted from video frames, and (2) constructing subsets of visual tokens for the selected caption generation module. Furthermore, we propose a new adaptive attention masking scheme that enhances attention on important visual tokens. Our experiments on three benchmark datasets demonstrate that the proposed framework significantly improves the performance of three recent video captioning models.
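
The abstract describes the framework only at a high level, so the sketch below is merely one plausible PyTorch rendering of how the two functions and the adaptive attention masking could be wired together. The token-importance score (token norm), the per-module token budgets, and the softmax-based additive attention bias are hypothetical stand-ins chosen for illustration, not the method detailed in the paper.

```python
# Illustrative sketch only: the paper's exact scoring, thresholds, and masking
# rules are not given in the abstract, so everything below is a hypothetical
# interpretation rather than the authors' implementation.
import torch
import torch.nn as nn


class CaptionModule(nn.Module):
    """Placeholder caption generator; a real system would wrap a full transformer captioner."""
    def __init__(self, d_model: int, n_layers: int):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, tokens, attn_mask=None):
        return self.encoder(tokens, mask=attn_mask)


def score_tokens(tokens: torch.Tensor) -> torch.Tensor:
    # Hypothetical importance score: L2 norm of each visual token.
    return tokens.norm(dim=-1)


def select_module_and_subset(tokens, modules, budgets):
    """Pick the smallest caption module whose token budget covers the informative
    tokens, then keep only the top-scoring tokens up to that budget."""
    scores = score_tokens(tokens)
    # Crude proxy for "informative": tokens whose score exceeds the mean.
    n_informative = int((scores > scores.mean()).sum())
    for module, budget in zip(modules, budgets):
        if n_informative <= budget:
            break
    keep = scores.topk(min(budget, tokens.size(0))).indices.sort().values
    return module, tokens[keep], scores[keep]


def adaptive_attention_mask(scores: torch.Tensor, temperature: float = 1.0):
    # Additive attention bias: higher-scoring tokens get a larger (less negative)
    # bias, so attention concentrates on them without hard-masking the rest.
    bias = torch.log_softmax(scores / temperature, dim=-1)
    return bias.unsqueeze(0).expand(scores.size(0), -1)


if __name__ == "__main__":
    d = 512
    frames = torch.randn(40, d)                     # visual tokens from 40 extracted frames
    modules = [CaptionModule(d, 2), CaptionModule(d, 4), CaptionModule(d, 6)]
    budgets = [8, 16, 32]                           # token budget per module size
    module, subset, scores = select_module_and_subset(frames, modules, budgets)
    mask = adaptive_attention_mask(scores)
    out = module(subset.unsqueeze(0), attn_mask=mask)
    print(out.shape)                                # (1, selected_tokens, d)
```

In this toy setup, the smallest module whose budget covers the estimated number of informative tokens is chosen, mirroring the idea of matching caption-module size to the video's visual content while trimming redundant tokens.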

Comments: Accepted to the AAAI 2025 Main Technical Track. This is an extended version of the original submission.
Categories: cs.CV, cs.AI
Related articles:
arXiv:2501.09532 [cs.CV] (Published 2025-01-16, updated 2025-02-01)
AdaFV: Rethinking of Visual-Language alignment for VLM acceleration
arXiv:2312.08870 [cs.CV] (Published 2023-12-12)
Vista-LLaMA: Reliable Video Narrator via Equal Distance to Visual Tokens
arXiv:2406.20092 [cs.CV] (Published 2024-06-28)
LLaVolta: Efficient Multi-modal Models via Stage-wise Visual Context Compression