arXiv Analytics

arXiv:2109.07644 [cs.CV]

OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication

Runsheng Xu, Hao Xiang, Xin Xia, Xu Han, Jinlong Liu, Jiaqi Ma

Published 2021-09-16, Version 1

Employing Vehicle-to-Vehicle communication to enhance perception performance in self-driving technology has attracted considerable attention recently; however, the absence of a suitable open dataset for benchmarking algorithms has made it difficult to develop and assess cooperative perception technologies. To this end, we present the first large-scale open simulated dataset for Vehicle-to-Vehicle perception. It contains over 70 interesting scenes, 111,464 frames, and 232,913 annotated 3D vehicle bounding boxes, collected from 8 towns in CARLA and a digital town of Culver City, Los Angeles. We then construct a comprehensive benchmark with a total of 16 implemented models to evaluate several information fusion strategies (i.e., early, late, and intermediate fusion) with state-of-the-art LiDAR detection algorithms. Moreover, we propose a new Attentive Intermediate Fusion pipeline to aggregate information from multiple connected vehicles. Our experiments show that the proposed pipeline can be easily integrated with existing 3D LiDAR detectors and achieves outstanding performance even with large compression rates. To encourage more researchers to investigate Vehicle-to-Vehicle perception, we will release the dataset, benchmark methods, and all related code at https://mobility-lab.seas.ucla.edu/opv2v/.
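The abstract does not spell out how the Attentive Intermediate Fusion pipeline combines feature maps from multiple connected vehicles. A minimal NumPy sketch of one plausible reading is given below: each vehicle's intermediate feature map (assumed already warped into the ego frame) is scored against the ego's map with a per-location dot product, the scores are softmax-normalized over the vehicle axis, and the maps are combined as a weighted sum. The function name, tensor shapes, and scoring rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def attentive_intermediate_fusion(features):
    """Fuse per-vehicle feature maps with per-location attention.

    features: array of shape (N, C, H, W); features[0] is assumed to be
    the ego vehicle's map, and all maps are assumed to be spatially
    aligned (hypothetical preprocessing step).
    """
    ego = features[0]  # (C, H, W)
    # Attention scores: channel-wise dot product between the ego map
    # and each vehicle's map at every spatial location -> (N, H, W).
    scores = np.einsum('chw,nchw->nhw', ego, features)
    # Numerically stable softmax over the vehicle axis, so the weights
    # at each (h, w) location sum to 1.
    scores -= scores.max(axis=0, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=0, keepdims=True)
    # Attention-weighted sum of the maps -> fused map of shape (C, H, W).
    return np.einsum('nhw,nchw->chw', weights, features)

fused = attentive_intermediate_fusion(np.random.randn(3, 16, 8, 8))
print(fused.shape)  # (16, 8, 8)
```

A learned variant would replace the dot-product scoring with trained projection layers, but the per-location softmax over contributing vehicles is the core of any attentive fusion scheme of this kind.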

Related articles:
arXiv:2008.07519 [cs.CV] (Published 2020-08-17)
V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction
arXiv:2405.20323 [cs.CV] (Published 2024-05-30)
$\textit{S}^3$Gaussian: Self-Supervised Street Gaussians for Autonomous Driving
Nan Huang et al.
arXiv:1811.10742 [cs.CV] (Published 2018-11-26)
Joint Monocular 3D Vehicle Detection and Tracking
Hou-Ning Hu et al.