arXiv Analytics

arXiv:2008.07519 [cs.CV]

V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction

Tsun-Hsuan Wang, Sivabalan Manivasagam, Ming Liang, Bin Yang, Wenyuan Zeng, James Tu, Raquel Urtasun

Published 2020-08-17 (Version 1)

In this paper, we explore the use of vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles. By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints. This allows us to see through occlusions and detect actors at long range, where the observations are very sparse or non-existent. We also show that our approach of sending compressed deep feature map activations achieves high accuracy while satisfying communication bandwidth requirements.

Related articles:
arXiv:2303.09998 [cs.CV] (Published 2023-03-17)
TBP-Former: Learning Temporal Bird's-Eye-View Pyramid for Joint Perception and Prediction in Vision-Centric Autonomous Driving
arXiv:2411.18363 [cs.CV] (Published 2024-11-27)
ChatRex: Taming Multimodal LLM for Joint Perception and Understanding
Qing Jiang et al.
arXiv:2109.07644 [cs.CV] (Published 2021-09-16)
OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication