arXiv:1908.11789 [cs.CV]

FisheyeMODNet: Moving Object detection on Surround-view Cameras for Autonomous Driving

Marie Yahiaoui, Hazem Rashed, Letizia Mariotti, Ganesh Sistu, Ian Clancy, Lucie Yahiaoui, Varun Ravi Kumar, Senthil Yogamani

Published 2019-08-30 (Version 1)

Moving Object Detection (MOD) is an important task for achieving robust autonomous driving. An autonomous vehicle has to estimate collision risk with other interacting objects in the environment and calculate an optimal trajectory. Collision risk is typically higher for moving objects than static ones, since decision making must account for the objects' future states and poses. This is particularly important for near-range objects around the vehicle, which are typically detected by a fisheye surround-view system that captures a 360° view of the scene. In this work, we propose a CNN architecture for moving object detection using fisheye images captured in an autonomous driving environment. As motion geometry is highly non-linear and unique for fisheye cameras, we will make an improved version of the current dataset public to encourage further research. To target embedded deployment, we design a lightweight encoder that shares weights across sequential images. The proposed network runs at 15 fps on a 1-teraflops automotive embedded system, achieving 40% IoU and 69.5% mIoU.
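To make the reported numbers concrete: IoU here is the intersection-over-union of the moving-object class, and mIoU averages the per-class IoUs (e.g. moving vs. static background). The sketch below is a generic, minimal illustration of these metrics on toy flattened masks, not the authors' evaluation code; the class encoding (0 = static, 1 = moving) is an assumption for illustration.

```python
def iou(pred, gt, cls):
    """Intersection-over-union of one class between two flat label lists."""
    inter = sum(p == cls and g == cls for p, g in zip(pred, gt))
    union = sum(p == cls or g == cls for p, g in zip(pred, gt))
    return inter / union if union else 1.0  # empty class counts as perfect

def mean_iou(pred, gt, classes):
    """mIoU: the per-class IoUs averaged over all classes."""
    return sum(iou(pred, gt, c) for c in classes) / len(classes)

# Toy flattened segmentation masks: 0 = static background, 1 = moving object.
pred = [0, 0, 1, 1, 0, 1, 0, 0]
gt   = [0, 0, 1, 0, 0, 1, 1, 0]

print(iou(pred, gt, 1))              # moving-class IoU: 2/4 = 0.5
print(mean_iou(pred, gt, [0, 1]))    # mean of 0.5 and 4/6
```

As in the paper's numbers, the moving-class IoU can sit well below the mIoU, because the dominant static/background class is usually much easier and pulls the class average up.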

Comments: Accepted for ICCV 2019 Workshop on 360° Perception and Interaction. A shorter version was presented at IMVIP 2019
Categories: cs.CV, eess.IV
Related articles:
arXiv:1505.00256 [cs.CV] (Published 2015-05-01)
DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving
arXiv:2006.06091 [cs.CV] (Published 2020-06-10)
Autonomous Driving with Deep Learning: A Survey of State-of-Art Technologies
arXiv:1907.08136 [cs.CV] (Published 2019-07-16)
Autonomous Driving in the Lung using Deep Learning for Localization