arXiv Analytics

arXiv:1605.08140 [cs.CV]

Temporal attention filters for human activity recognition in videos

AJ Piergiovanni, Chenyou Fan, Michael S. Ryoo

Published 2016-05-26 (Version 1)

In this paper, we introduce the concept of temporal attention filters and describe how they can be used for human activity recognition from videos. Many high-level activities are composed of multiple temporal parts (e.g., sub-events) with different durations and speeds, and our objective is to make the model explicitly consider such temporal structure using multiple temporal filters. Our attention filters are designed to be fully differentiable, allowing end-to-end training of the temporal filters together with the underlying frame-based or segment-based convolutional neural network architectures. The paper not only presents an approach for learning optimal static temporal attention filters shared across different videos, but also describes an approach for dynamically adjusting the attention filters per test video using recurrent long short-term memory (LSTM) networks. We experimentally confirm that the proposed temporal attention filters benefit activity recognition tasks by capturing the temporal structure in videos.
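
To make the idea of fully differentiable temporal attention filters concrete, here is a minimal sketch. It assumes a Gaussian parameterization (a learnable center and width per filter) that softly pools per-frame CNN features over time; the function name `temporal_attention_filters`, the normalization, and the exact parameterization are illustrative assumptions, not the paper's precise formulation.

```python
import torch

def temporal_attention_filters(centers, log_widths, features):
    """Apply N Gaussian temporal attention filters to per-frame features.

    centers:    (N,) filter centers in [0, 1] relative video time (assumed).
    log_widths: (N,) log of Gaussian widths; log-space keeps widths positive
                under gradient updates.
    features:   (T, D) per-frame CNN features for one video of T frames.

    Returns an (N, D) tensor: one temporally pooled feature per filter.
    """
    T = features.shape[0]
    t = torch.linspace(0.0, 1.0, T)                       # frame positions, (T,)
    widths = torch.exp(log_widths)                        # (N,)
    # Gaussian weight of each filter over each frame, shape (N, T).
    w = torch.exp(-0.5 * ((t[None, :] - centers[:, None]) / widths[:, None]) ** 2)
    # Normalize each filter's weights to sum to 1 over time.
    w = w / (w.sum(dim=1, keepdim=True) + 1e-8)
    # Weighted temporal pooling of the frame features, shape (N, D).
    return w @ features

# Example: 3 learnable filters over a 64-frame video with 512-dim features.
centers = torch.nn.Parameter(torch.tensor([0.25, 0.5, 0.75]))
log_widths = torch.nn.Parameter(torch.full((3,), -2.0))
features = torch.randn(64, 512)
pooled = temporal_attention_filters(centers, log_widths, features)
print(pooled.shape)  # torch.Size([3, 512])
```

Because every operation is differentiable, gradients reach the filter centers and widths, so the filters can be trained end-to-end with the underlying network; in the dynamic variant described above, an LSTM could predict these parameters per video instead of treating them as shared, static learnable values.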

Related articles:
arXiv:1502.06075 [cs.CV] (Published 2015-02-21)
A new network-based algorithm for human activity recognition in video
arXiv:2501.08471 [cs.CV] (Published 2025-01-14)
Benchmarking Classical, Deep, and Generative Models for Human Activity Recognition
arXiv:1707.09725 [cs.CV] (Published 2017-07-31)
Analysis and Optimization of Convolutional Neural Network Architectures