arXiv:2004.03044 [cs.CV]

When, Where, and What? A New Dataset for Anomaly Detection in Driving Videos

Yu Yao, Xizi Wang, Mingze Xu, Zelin Pu, Ella Atkins, David Crandall

Published 2020-04-06 (Version 1)

Video anomaly detection (VAD) has been extensively studied. However, research on egocentric traffic videos with dynamic scenes lacks large-scale benchmark datasets as well as effective evaluation metrics. This paper proposes traffic anomaly detection with a when-where-what pipeline to detect, localize, and recognize anomalous events from egocentric videos. We introduce a new dataset, Detection of Traffic Anomaly (DoTA), containing 4,677 videos with temporal, spatial, and categorical annotations. A new spatial-temporal area under curve (STAUC) evaluation metric is proposed and used with DoTA. State-of-the-art methods are benchmarked for two VAD-related tasks. Experimental results show that STAUC is an effective VAD metric. To our knowledge, DoTA is the largest traffic anomaly dataset to date and the first to support traffic anomaly studies across the when-where-what perspectives. Our code and dataset are available at: https://github.com/MoonBlvd/Detection-of-Traffic-Anomaly
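To make the evaluation idea concrete, the sketch below contrasts the conventional frame-level AUC used in VAD with a spatially weighted variant in the spirit of STAUC, where a correctly flagged frame only counts in proportion to how well the anomaly is localized. This is a minimal illustration under stated assumptions, not the authors' implementation: the exact STAUC formulation is defined in the paper, and the `scores`, `labels`, and `overlap` arrays here are hypothetical inputs.

```python
# Illustrative sketch (not the DoTA authors' code): frame-level AUC vs. a
# spatially weighted AUC in the spirit of STAUC. Assumptions: `scores` are
# per-frame anomaly scores in [0, 1], `labels` are binary per-frame anomaly
# annotations, and `overlap` gives, per frame, the fraction of the model's
# anomaly evidence that falls inside the annotated anomalous region
# (1.0 = perfectly localized, 0.0 = entirely off-target).
import numpy as np


def auc_curves(scores, labels, overlap, num_thresholds=100):
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    overlap = np.asarray(overlap, dtype=float)

    thresholds = np.linspace(1.0, 0.0, num_thresholds)
    fpr, tpr, spatial_tpr = [], [], []
    num_pos = max(labels.sum(), 1)
    num_neg = max((~labels).sum(), 1)

    for t in thresholds:
        detected = scores >= t
        tp_frames = np.logical_and(detected, labels)
        fp = np.logical_and(detected, ~labels).sum()
        # Standard frame-level rates.
        fpr.append(fp / num_neg)
        tpr.append(tp_frames.sum() / num_pos)
        # Spatially weighted true positives: each correctly flagged frame
        # contributes only its localization quality instead of a full count.
        spatial_tpr.append((tp_frames * overlap).sum() / num_pos)

    auc = np.trapz(tpr, fpr)                 # conventional frame-level AUC
    stauc_like = np.trapz(spatial_tpr, fpr)  # spatially weighted variant
    return auc, stauc_like


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.random(500) < 0.2
    scores = np.clip(labels * 0.6 + rng.random(500) * 0.5, 0, 1)
    overlap = np.where(labels, rng.random(500), 0.0)
    print(auc_curves(scores, labels, overlap))
```

The point of the weighting is that a model which fires at the right times but attends to the wrong image regions keeps the same frame-level AUC while its spatially weighted value drops; that localization gap is what a spatial-temporal metric like STAUC is designed to expose.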
