Direct RGB-D SLAM
Contact: Lingni Ma, Robert Maier.
Our RGB-D SLAM system builds upon our Direct RGB-D Odometry (see below). It extends the odometry approach with a geometric error term and performs frame-to-keyframe matching. Each new keyframe is inserted into a pose graph. Additionally, we search for loop closures to older keyframes, which provide further constraints for the pose graph. The graph is incrementally optimized using the g2o framework. The output of the SLAM system is a metrically consistent pose for every frame.
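The pose-graph idea can be illustrated with a deliberately simplified sketch. g2o optimizes 6-DoF SE(3) poses; the toy version below uses 1-D poses so the loop-closure correction reduces to linear least squares. All names here are illustrative and not taken from the actual SLAM code.

```python
import numpy as np

def optimize_pose_graph_1d(n, edges):
    """Toy 1-D pose-graph optimization (a linear analogue of what g2o
    does for SE(3) poses). edges is a list of (i, j, z) constraints
    meaning pose[j] - pose[i] should equal z. Pose 0 is fixed at 0 to
    remove the gauge freedom."""
    A = np.zeros((len(edges) + 1, n))
    b = np.zeros(len(edges) + 1)
    for k, (i, j, z) in enumerate(edges):
        A[k, i], A[k, j], b[k] = -1.0, 1.0, z   # residual: x_j - x_i - z
    A[-1, 0] = 1.0                              # gauge constraint: x_0 = 0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Three odometry edges that each overshoot slightly (drift), plus one
# loop closure from keyframe 3 back to keyframe 0.
edges = [(0, 1, 1.1), (1, 2, 1.1), (2, 3, 1.1), (3, 0, -3.0)]
poses = optimize_pose_graph_1d(4, edges)
```

The loop closure spreads the accumulated drift evenly over the chain, which is the qualitative effect graph optimization has on the keyframe trajectory.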
For source code and basic documentation visit the Github repository.
Direct RGB-D Odometry
In contrast to feature-based algorithms, the approach uses all pixels of two consecutive RGB-D images to estimate the camera motion. The implementation runs in real time on a recent CPU.
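The core of direct alignment is minimizing the photometric error over all pixels with Gauss-Newton. As a hedged one-dimensional sketch (the real odometry estimates a full 6-DoF SE(3) pose from intensity and depth; here the "pose" is just a scalar image shift, and all names are illustrative):

```python
import numpy as np

def direct_align_1d(i_ref, i_cur, iters=30):
    """Estimate the sub-pixel shift t with i_cur(x) ~ i_ref(x + t) by
    Gauss-Newton on the photometric error over all pixels. This is a
    toy 1-D analogue of direct RGB-D image alignment."""
    x = np.arange(len(i_ref), dtype=float)
    grad = np.gradient(i_ref)                    # image gradient = 1-D Jacobian
    t = 0.0
    for _ in range(iters):
        r = np.interp(x + t, x, i_ref) - i_cur   # photometric residual
        J = np.interp(x + t, x, grad)            # Jacobian at warped positions
        dt = -np.dot(J, r) / np.dot(J, J)        # Gauss-Newton update
        t += dt
        if abs(dt) < 1e-9:
            break
    return t

# A smooth test signal shifted by 2.5 pixels is recovered accurately.
x = np.arange(100, dtype=float)
i_ref = np.exp(-((x - 50.0) / 10.0) ** 2)
i_cur = np.interp(x + 2.5, x, i_ref)
shift = direct_align_1d(i_ref, i_cur)
```

Because every pixel contributes to the residual, no feature extraction or matching step is needed, which is the key difference from feature-based odometry.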
For source code and basic documentation visit the Github repository.
Related Publications
Conference and Workshop Papers
2015
Dense Continuous-Time Tracking and Mapping with Rolling Shutter RGB-D Cameras. In IEEE International Conference on Computer Vision (ICCV), 2015. ([video][supplementary][datasets])
2013
Dense Visual SLAM for RGB-D Cameras. In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013.
Robust Odometry Estimation for RGB-D Cameras. In IEEE International Conference on Robotics and Automation (ICRA), 2013.
Best Vision Paper Award - Finalist
2011
Real-Time Visual Odometry from Dense RGB-D Images. In Workshop on Live Dense Reconstruction with Moving Cameras at the Intl. Conf. on Computer Vision (ICCV), 2011.
Other Publications
2012
Odometry from RGB-D Cameras for Autonomous Quadrocopters. Master's thesis, Technical University of Munich, 2012.