The assumption of scene rigidity is typical in SLAM algorithms. Such a strong assumption limits the use of most visual SLAM systems in populated real-world environments, which are the target of several relevant applications like service robotics or autonomous vehicles. In this paper we present DynaSLAM, a visual SLAM system that, building on ORB-SLAM2, adds the capabilities of dynamic object detection and background inpainting. DynaSLAM is robust in dynamic scenarios for monocular, stereo and RGB-D configurations. Moving objects are detected by multi-view geometry, deep learning, or a combination of both. Having a static map of the scene allows us to inpaint the frame background occluded by such dynamic objects. We evaluate our system on public monocular, stereo and RGB-D datasets, and study the impact of several accuracy/speed trade-offs to assess the limits of the proposed methodology. DynaSLAM outperforms standard visual SLAM baselines in accuracy in highly dynamic scenarios. It also estimates a map of the static parts of the scene, which is essential for long-term applications in real-world environments.
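For intuition, below is a minimal sketch in Python of the two detection cues mentioned above: discarding keypoints that fall on pixels a segmentation network (such as Mask R-CNN) labels as a-priori dynamic, and a geometric consistency test that flags points whose measured depth disagrees with the depth predicted by a keyframe under the static-world assumption. The function names and the depth threshold are illustrative assumptions for this page, not the released DynaSLAM API, and the geometric test only loosely follows the multi-view check described in the paper.

```python
import numpy as np

def backproject(u, v, depth, K):
    """Back-project pixel (u, v) with measured depth into the camera frame."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def is_dynamic_by_geometry(uv, depth_key, depth_cur, T_cur_key, K, thresh=0.04):
    """Flag a keypoint as moving when the depth measured in the current frame
    disagrees with the depth predicted by the (static) keyframe geometry.

    uv:        keypoint pixel in the keyframe
    depth_key: depth measured at uv in the keyframe (meters)
    depth_cur: depth measured at the reprojected pixel in the current frame
    T_cur_key: 4x4 transform from keyframe to current-frame coordinates
    thresh:    allowed depth residual in meters (a tuning assumption)
    """
    p_key = backproject(uv[0], uv[1], depth_key, K)   # 3D point in the keyframe
    p_cur = (T_cur_key @ np.append(p_key, 1.0))[:3]   # same point, current frame
    return abs(depth_cur - p_cur[2]) > thresh         # large residual => dynamic

def filter_static_keypoints(keypoints, dynamic_mask):
    """Keep only keypoints that do not fall on CNN-segmented dynamic pixels.

    keypoints:    iterable of (u, v) pixel coordinates
    dynamic_mask: HxW boolean array from a segmentation network
    """
    return [(u, v) for (u, v) in keypoints
            if not dynamic_mask[int(round(v)), int(round(u))]]
```

In the full system, the surviving static keypoints are handed to the ORB-SLAM2 tracking thread; as the abstract notes, the learned and geometric cues can be used separately or combined.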
Results on RGB and depth images from the TUM dataset, comparing the input and output of our framework.
@article{bescos2018dynaslam,
author = {Bescos, Berta and F{\'a}cil, Jos{\'e} M. and Civera, Javier and Neira, Jos{\'e}},
title = {{DynaSLAM: Tracking, Mapping and Inpainting in Dynamic Scenes}},
journal = {IEEE Robotics and Automation Letters},
volume = {3},
number = {4},
pages = {4076--4083},
year = {2018}
}