DynaSLAM: Tracking, Mapping and Inpainting in Dynamic Scenes

Berta Bescos, Jose M. Facil, Javier Civera, Jose Neira

University of Zaragoza, Spain

RA-L and IROS 2018

Abstract

The assumption of scene rigidity is typical in SLAM algorithms. Such a strong assumption limits the use of most visual SLAM systems in populated real-world environments, which are the target of several relevant applications like service robotics or autonomous vehicles. In this paper we present DynaSLAM, a visual SLAM system that, building on ORB-SLAM2, adds the capabilities of dynamic object detection and background inpainting. DynaSLAM is robust in dynamic scenarios for monocular, stereo and RGB-D configurations. We detect moving objects either by multi-view geometry, by deep learning, or by both. Having a static map of the scene makes it possible to inpaint the frame background occluded by such dynamic objects. We evaluate our system on public monocular, stereo and RGB-D datasets, and study the impact of several accuracy/speed trade-offs to assess the limits of the proposed methodology. DynaSLAM outperforms the accuracy of standard visual SLAM baselines in highly dynamic scenarios, and it also estimates a map of the static parts of the scene, which is a must for long-term applications in real-world environments.
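To make the multi-view geometry cue concrete, below is a minimal C++ sketch of the kind of depth-consistency test it relies on: a map point observed in earlier, static views is reprojected into the current frame, and if the depth it should have there disagrees strongly with the depth the sensor actually measures, the point likely lies on a moving object. This is an illustrative simplification, not the DynaSLAM implementation; Vec3, Pose, backProject, isDynamic and the threshold are our own names and choices, and the real system applies further geometric checks.

#include <cmath>

// Minimal 3-vector and rigid-body pose types (illustrative only).
struct Vec3 { double x, y, z; };

struct Pose {
    double R[3][3]; // rotation, row-major
    Vec3 t;         // translation
    // Transform a point from the world frame into this camera's frame.
    Vec3 apply(const Vec3& p) const {
        return { R[0][0]*p.x + R[0][1]*p.y + R[0][2]*p.z + t.x,
                 R[1][0]*p.x + R[1][1]*p.y + R[1][2]*p.z + t.y,
                 R[2][0]*p.x + R[2][1]*p.y + R[2][2]*p.z + t.z };
    }
};

// Back-project pixel (u, v) with measured depth d into the camera frame,
// using pinhole intrinsics fx, fy, cx, cy.
Vec3 backProject(double u, double v, double d,
                 double fx, double fy, double cx, double cy) {
    return { (u - cx) * d / fx, (v - cy) * d / fy, d };
}

// Depth-consistency test: worldPoint was triangulated from static views.
// z_proj is the depth it should have in the current camera; z_meas is the
// depth the RGB-D sensor reports at its projected pixel. A large gap means
// something moved in front of (or away from) where the point used to be.
bool isDynamic(const Vec3& worldPoint, const Pose& worldToCurrent,
               double z_meas, double threshold /* metres, tuned */) {
    const double z_proj = worldToCurrent.apply(worldPoint).z;
    return std::abs(z_proj - z_meas) > threshold;
}

In the released implementation this geometric cue is combined with per-pixel instance masks from a CNN (Mask R-CNN), so that entire movable objects, and not only the points on them that happen to move, can be discarded before ORB features are extracted and tracked.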

Check out some of our results on RGB and depth images from the TUM dataset, comparing the input and output of our framework.

Try Our Code

C++: you can clone our GitHub repository here

Check Out Our Video

Paper

You can download the journal version from HERE, and the arXiv paper from HERE. If you use this work in your research, we will be happy if you cite us :)

@article{bescos2018dynaslam,
  author = {Bescos, Berta and F{\'a}cil, Jos{\'e} M. and Civera, Javier and Neira, Jos{\'e}},
  title = {{DynaSLAM: Tracking, Mapping and Inpainting in Dynamic Scenes}},
  journal = {IEEE Robotics and Automation Letters},
  year = {2018}
}