arXiv:1806.05620 [cs.CV]

DynaSLAM: Tracking, Mapping, and Inpainting in Dynamic Scenes

Berta Bescós, José M. Fácil, Javier Civera, José Neira

Published 2018-06-14Version 1

The assumption of scene rigidity is typical in SLAM algorithms. Such a strong assumption limits the use of most visual SLAM systems in populated real-world environments, which are the target of several relevant applications like service robotics or autonomous vehicles. In this paper we present DynaSLAM, a visual SLAM system that builds on ORB-SLAM2 [1] and adds the capabilities of dynamic object detection and background inpainting. DynaSLAM is robust in dynamic scenarios for monocular, stereo and RGB-D configurations. Moving objects are detected by multi-view geometry, deep learning, or both. Having a static map of the scene allows inpainting the frame background occluded by such dynamic objects. We evaluate our system on public monocular, stereo and RGB-D datasets, and study several accuracy/speed trade-offs to assess the limits of the proposed methodology. DynaSLAM outperforms the accuracy of standard visual SLAM baselines in highly dynamic scenarios, and it also estimates a map of the static parts of the scene, which is a must for long-term applications in real-world environments.
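The inpainting step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline (which operates on ORB-SLAM2 keyframes and warps information from previous views): `inpaint_background`, the toy mask, and the static background model are all assumptions introduced here. The idea is simply that once dynamic pixels are identified (by segmentation and/or multi-view geometry), they can be replaced with values from an accumulated static map.

```python
import numpy as np

def inpaint_background(frame, dynamic_mask, static_map):
    """Replace pixels flagged as dynamic with the static background model.

    frame        -- current image, H x W array
    dynamic_mask -- boolean H x W array, True where a moving object was
                    detected (e.g. by a segmentation network or geometry)
    static_map   -- accumulated background image aligned with the frame
    """
    out = frame.copy()
    out[dynamic_mask] = static_map[dynamic_mask]  # fill occluded background
    return out

# Toy example: a 4x4 grayscale frame with a 2x2 "moving object".
frame = np.full((4, 4), 100, dtype=np.uint8)
frame[1:3, 1:3] = 255                     # bright dynamic-object pixels
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                     # detection result
static = np.full((4, 4), 100, dtype=np.uint8)

clean = inpaint_background(frame, mask, static)
print(int(clean.max()))  # 100: the object is removed, background restored
```

In the real system the static map is only available where the background was observed from other viewpoints, so unobserved regions would remain as holes rather than being filled.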

Related articles:
arXiv:2404.03210 [cs.CV] (Published 2024-04-04)
HDR Imaging for Dynamic Scenes with Events
arXiv:1201.4895 [cs.CV] (Published 2012-01-23, updated 2013-06-26)
Compressive Acquisition of Dynamic Scenes
arXiv:1704.04394 [cs.CV] (Published 2017-04-14)
DESIRE: Distant Future Prediction in Dynamic Scenes with Interacting Agents