arXiv Analytics

arXiv:1811.10742 [cs.CV]

Joint Monocular 3D Vehicle Detection and Tracking

Hou-Ning Hu, Qi-Zhi Cai, Dequan Wang, Ji Lin, Min Sun, Philipp Krähenbühl, Trevor Darrell, Fisher Yu

Published 2018-11-26 (Version 1)

3D vehicle detection and tracking from a monocular camera requires detecting and associating vehicles while jointly estimating their locations and extents. It is challenging because vehicles are in constant motion and it is practically impossible to recover 3D positions from a single image alone. In this paper, we propose a novel framework that jointly detects and tracks 3D vehicle bounding boxes. Our approach leverages 3D pose estimation to learn 2D patch association over time and uses temporal information from tracking to obtain stable 3D estimates. Our method also leverages 3D box depth ordering and motion to link together the tracks of occluded objects. We train our system on realistic 3D virtual environments, collecting a new diverse, large-scale and densely annotated dataset with accurate 3D trajectory annotations. Our experiments demonstrate that our method benefits from inferring 3D for both data association and tracking robustness, leveraging our dynamic 3D tracking dataset.
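To make the depth-ordering idea concrete, below is a minimal Python sketch, not the authors' implementation: it shows how 3D centre distance, a constant-velocity motion model, and near-to-far depth ordering could be combined so that occluded tracks stay alive for a few frames during greedy association. All names (Track, associate, max_dist, max_missed) and thresholds are illustrative assumptions, not taken from the paper or its code.

# Minimal sketch (illustrative, not the paper's method): greedy association of
# 3D detections to tracks, keeping likely-occluded tracks alive via depth order.
from dataclasses import dataclass
import numpy as np

@dataclass
class Track:
    track_id: int
    center: np.ndarray     # 3D box centre in camera coordinates (x, y, z)
    velocity: np.ndarray   # constant-velocity motion model
    missed: int = 0        # frames since the last matched detection

def associate(tracks, detections, max_dist=3.0, max_missed=5):
    """Greedily match detections (list of 3D centres) to existing tracks."""
    # Predict each track forward with its motion model before matching.
    for t in tracks:
        t.center = t.center + t.velocity

    unmatched = list(range(len(detections)))
    # Process tracks in near-to-far depth order: closer vehicles are matched
    # first, and a close track can explain why a farther one is unobserved.
    for t in sorted(tracks, key=lambda t: t.center[2]):
        if not unmatched:
            t.missed += 1
            continue
        dists = [np.linalg.norm(detections[i] - t.center) for i in unmatched]
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            i = unmatched.pop(j)
            t.velocity = detections[i] - t.center
            t.center = detections[i]
            t.missed = 0
        else:
            # No nearby detection: the track may be occluded by a closer one,
            # so keep it alive and let the motion model carry it forward.
            t.missed += 1

    tracks[:] = [t for t in tracks if t.missed <= max_missed]
    # Spawn new tracks for detections that matched nothing.
    next_id = max((t.track_id for t in tracks), default=-1) + 1
    for i in unmatched:
        tracks.append(Track(next_id, detections[i].copy(), np.zeros(3)))
        next_id += 1
    return tracks

# Example call with a synthetic detection 10 m in front of the camera:
# tracks = associate([], [np.array([0.0, 1.5, 10.0])])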

Related articles:
arXiv:2405.20323 [cs.CV] (Published 2024-05-30)
$\textit{S}^3$Gaussian: Self-Supervised Street Gaussians for Autonomous Driving
Nan Huang et al.
arXiv:2109.07644 [cs.CV] (Published 2021-09-16)
OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication