DynaPix SLAM: A Pixel-Based Dynamic Visual SLAM Approach
Chenghao Xu1,3,* Elia Bonetto1,2,* Aamir Ahmad2,1
1Max Planck Institute for Intelligent Systems 2University of Stuttgart 3Swiss Federal Institute of Technology Lausanne
* Co-First and Corresponding Authors
Visual Simultaneous Localization and Mapping (V-SLAM) methods achieve remarkable performance in static environments but struggle in scenes with moving objects, which severely affect their core modules. To mitigate this, dynamic V-SLAM approaches often exploit semantic information, geometric constraints, or optical flow to exclude dynamic elements. However, such methods are limited by several factors, including inaccurate flow estimates, reliance on precise segmentation, predefined thresholds, the a-priori designation of selected classes as dynamic, and the inability to recognize unknown or unexpected moving objects. To address these issues, we introduce DynaPix, a novel visual SLAM system based on per-pixel motion probability estimation. Our approach combines a new semantic-free estimation module with an improved pose optimization process. The per-pixel motion probabilities are estimated through a novel static-background differencing method applied to both images and optical flows obtained from splatted frames. DynaPix fully integrates these motion probabilities into the map point selection and weighted bundle adjustment within the tracking and optimization modules of ORB-SLAM2.
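The pipeline described above can be sketched at a high level: compare the observed frame and flow against a rendered static background, turn the residuals into a per-pixel motion probability, and use that probability to down-weight residuals in optimization. The snippet below is a minimal, hedged illustration of this idea, not the paper's implementation; the function names, the noisy-OR fusion, and the `sigma_img`/`sigma_flow` scales are all assumptions for the sketch.

```python
import numpy as np

def motion_probability(frame, flow, bg_frame, bg_flow,
                       sigma_img=25.0, sigma_flow=1.0):
    """Hypothetical per-pixel motion probability via static-background
    differencing (a sketch of the idea, not the paper's method).

    frame, bg_frame: (H, W) grayscale images; bg_frame is the static
    background rendered/splatted to the same viewpoint.
    flow, bg_flow: (H, W, 2) optical flow fields; bg_flow is the flow the
    static background alone would induce under the camera motion."""
    # Appearance cue: pixels that differ from the static background.
    img_diff = np.abs(frame.astype(np.float64) - bg_frame.astype(np.float64))
    p_img = 1.0 - np.exp(-(img_diff / sigma_img) ** 2)

    # Motion cue: flow residual w.r.t. the camera-induced background flow.
    flow_res = np.linalg.norm(flow - bg_flow, axis=-1)
    p_flow = 1.0 - np.exp(-(flow_res / sigma_flow) ** 2)

    # Fuse the two cues with a noisy-OR: dynamic if either cue fires.
    return 1.0 - (1.0 - p_img) * (1.0 - p_flow)

def weighted_residual(residual, p_motion):
    """Down-weight a reprojection residual by the pixel's probability of
    being static, as a weighted bundle adjustment would (illustrative)."""
    return (1.0 - p_motion) * residual
```

A static pixel (image and flow matching the background) yields a probability near zero, so its reprojection residual keeps full weight in the bundle adjustment, while a pixel with a large appearance or flow discrepancy is suppressed.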
Citation
@misc{xu2023dynapix,
  title={DynaPix SLAM: A Pixel-Based Dynamic Visual SLAM Approach},
  author={Chenghao Xu and Elia Bonetto and Aamir Ahmad},
  year={2023},
  eprint={2309.09879},
  archivePrefix={arXiv},
  primaryClass={cs.RO}
}
Contact
Questions: dynapix@tue.mpg.de
Licensing: ps-licensing@tue.mpg.de