arXiv:2108.12375 [cs.CV]

A Pedestrian Detection and Tracking Framework for Autonomous Cars: Efficient Fusion of Camera and LiDAR Data

Muhammad Mobaidul Islam, Abdullah Al Redwan Newaz, Ali Karimoddini

Published 2021-08-27, Version 1

This paper presents a novel method for pedestrian detection and tracking that fuses camera and LiDAR sensor data. To deal with the challenges of autonomous driving scenarios, an integrated detection and tracking framework is proposed. In the detection phase, LiDAR streams are converted to computationally tractable depth images, and a deep neural network is developed to identify pedestrian candidates in both RGB and depth images. To provide accurate information, the detection phase is further enhanced by fusing the multi-modal sensor information using a Kalman filter. The tracking phase combines Kalman filter prediction with an optical flow algorithm to track multiple pedestrians in a scene. We evaluate our framework on a real public driving dataset. Experimental results demonstrate that the proposed method achieves a significant performance improvement over a baseline method that relies solely on image-based pedestrian detection.
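
Two ingredients of the pipeline described above lend themselves to a short illustration: projecting a LiDAR point cloud into a camera-aligned depth image, and fusing per-frame pedestrian detections from the RGB and depth branches with a Kalman filter. The following Python sketch is not the authors' implementation; the pinhole intrinsics, constant-velocity state, noise levels, and example coordinates are illustrative assumptions, and the optical-flow refinement used in the tracking phase is only noted in a comment.

import numpy as np

def lidar_to_depth_image(points_cam, K, height, width):
    """Project LiDAR points (Nx3, already in the camera frame) into a depth image."""
    z = points_cam[:, 2]
    pts = points_cam[z > 0.1]                # keep points in front of the camera
    uv = (K @ pts.T).T                       # pinhole projection with intrinsics K
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = np.zeros((height, width), dtype=np.float32)
    depth[v[inside], u[inside]] = pts[inside, 2]   # occlusion handling omitted for brevity
    return depth

class ConstantVelocityKF:
    """Tracks a pedestrian's image-plane centre with state (x, y, vx, vy)."""
    def __init__(self, x0, y0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, 1, 0],
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # we observe only the centre position
        self.Q = np.eye(4) * 0.01                        # process noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z, meas_var):
        R = np.eye(2) * meas_var                         # per-sensor measurement noise (assumed)
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + R
        Kg = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + Kg @ y
        self.P = (np.eye(4) - Kg @ self.H) @ self.P

# Depth-image conversion on a toy point cloud with assumed intrinsics.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
points = np.array([[0.5, 0.0, 8.0], [-0.3, 0.2, 12.0]])
depth_img = lidar_to_depth_image(points, K, height=480, width=640)

# Fusion: feed the same filter detections from both branches, each weighted by its
# assumed noise; the tracking phase would interleave predict() with optical-flow-refined
# measurements between detector frames.
kf = ConstantVelocityKF(320.0, 240.0)
kf.predict()
kf.update(np.array([322.0, 238.0]), meas_var=4.0)   # detection from the RGB branch
kf.update(np.array([321.0, 239.0]), meas_var=2.0)   # detection from the depth branch
print("fused centre:", kf.x[:2])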

Related articles:
arXiv:2305.09401 [cs.CV] (Published 2023-05-16)
Diffusion Dataset Generation: Towards Closing the Sim2Real Gap for Pedestrian Detection
arXiv:2204.05799 [cs.CV] (Published 2022-04-12)
EVOPS Benchmark: Evaluation of Plane Segmentation from RGBD and LiDAR Data
arXiv:1812.00876 [cs.CV] (Published 2018-11-18)
Deep Learning based Pedestrian Detection at Distance in Smart Cities