
LSD-SLAM: Large-Scale Direct Monocular SLAM

Contact: Jakob Engel, Prof. Dr. Daniel Cremers

Check out DSO, our new direct & sparse visual odometry method published in July 2016, and its stereo extension published in August 2017: DSO: Direct Sparse Odometry

LSD-SLAM is a novel, direct monocular SLAM technique: instead of using keypoints, it operates directly on image intensities for both tracking and mapping. The camera is tracked using direct image alignment, while geometry is estimated in the form of semi-dense depth maps, obtained by filtering over many pixelwise stereo comparisons. We then build a Sim(3) pose graph of keyframes, which allows building scale-drift-corrected, large-scale maps including loop closures. LSD-SLAM runs in real-time on a CPU, and even on a modern smartphone.
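To make the direct image alignment step concrete, here is a minimal sketch of the photometric residual that tracking minimizes over the high-gradient pixels of a reference keyframe. This is our own illustration, not the lsd_slam code; it assumes a pinhole camera and a small self-written Image type with bilinear intensity lookup:

#include <Eigen/Dense>

// Grayscale image with bilinear intensity lookup.
struct Image {
    const float* data; int w, h;
    float at(float u, float v) const {
        int x = (int)u, y = (int)v;
        float dx = u - x, dy = v - y;
        const float* p = data + y * w + x;
        return (1 - dx) * (1 - dy) * p[0] + dx * (1 - dy) * p[1]
             + (1 - dx) * dy * p[w] + dx * dy * p[w + 1];
    }
};

struct Pinhole { float fx, fy, cx, cy; };

// Back-project pixel (u,v) of the reference keyframe using its inverse depth,
// transform the 3D point by the candidate camera pose (R, t), re-project into
// the new image, and return the intensity difference. Tracking minimizes the
// robustly weighted sum of squares of these residuals over all semi-dense pixels.
float photometricResidual(const Image& refImg, const Image& newImg,
                          const Pinhole& K,
                          const Eigen::Matrix3f& R, const Eigen::Vector3f& t,
                          float u, float v, float idepth)
{
    Eigen::Vector3f p((u - K.cx) / K.fx, (v - K.cy) / K.fy, 1.0f);
    p /= idepth;                      // 3D point in the reference camera frame
    Eigen::Vector3f q = R * p + t;    // point in the new camera frame
    float un = K.fx * q.x() / q.z() + K.cx;
    float vn = K.fy * q.y() / q.z() + K.cy;
    return refImg.at(u, v) - newImg.at(un, vn);
}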

Code Available (see below)!



Difference to keypoint-based methods


As a direct method, LSD-SLAM uses all information in the image, including e.g. edges, while keypoint-based approaches can only use small patches around corners. This leads to higher accuracy and more robustness in sparsely textured environments (e.g. indoors), and to a much denser 3D reconstruction. Further, as the proposed pixelwise depth filters incorporate many small-baseline stereo comparisons instead of only a few large-baseline frames, there are far fewer outliers.
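To illustrate the filtering idea, each semi-dense pixel can carry an inverse-depth hypothesis modeled as a Gaussian, which is refined with every new small-baseline stereo observation. The sketch below uses our own notation, not the lsd_slam API; it fuses an observation into the running estimate as a product of Gaussians, so many cheap observations quickly shrink the variance and suppress outliers:

// Per-pixel inverse-depth hypothesis, kept as a Gaussian (mean, variance).
struct DepthHypothesis {
    float idepth;   // current inverse-depth mean
    float var;      // current variance

    // Fuse a new stereo observation (obsIdepth, obsVar): standard
    // product-of-Gaussians update, i.e. a 1D Kalman filter measurement step.
    void update(float obsIdepth, float obsVar) {
        float w = var / (var + obsVar);       // weight of the new observation
        idepth = (1.0f - w) * idepth + w * obsIdepth;
        var = var * obsVar / (var + obsVar);  // fused variance always shrinks
    }
};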



Building a global map


LSD-SLAM builds a pose graph of keyframes, each containing an estimated semi-dense depth map. Using a novel direct image alignment formulation, we directly track Sim(3) constraints between keyframes (i.e., rigid-body motion plus scale), which are used to build a pose graph that is then optimized. This formulation makes it possible to detect and correct substantial scale drift after large loop closures, and to deal with large scale variation within the same map.
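For intuition, a single pose-graph constraint can be written as a 7-dimensional residual on Sim(3): three degrees of freedom for translation, three for rotation and one for scale. The sketch below is illustrative, not the lsd_slam classes; it uses the Sophus Lie-group library (which lsd_slam also builds on) and assumes a recent Sophus version with a member log():

#include <sophus/sim3.hpp>
#include <Eigen/Dense>

// Residual of one pose-graph edge: zero exactly when the keyframe poses T_i
// and T_j agree with the relative similarity transform T_ij measured by
// direct image alignment. Graph optimization minimizes these residuals,
// which is what corrects accumulated scale drift after a loop closure.
Eigen::Matrix<double, 7, 1> edgeResidual(const Sophus::Sim3d& T_i,
                                         const Sophus::Sim3d& T_j,
                                         const Sophus::Sim3d& T_ij)
{
    return (T_ij.inverse() * T_i.inverse() * T_j).log();
}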



Mobile Implementation

The approach even runs on a smartphone, where it can be used for AR. The estimated semi-dense depth maps are in-painted and completed with an estimated ground plane, which then makes it possible to implement basic physical interaction with the environment.
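The smartphone AR code is not part of the open-source release, but the ground-plane step can be illustrated by a standard RANSAC plane fit over the semi-dense point cloud. The following is our own sketch under that assumption, not the released implementation:

#include <Eigen/Dense>
#include <cmath>
#include <random>
#include <vector>

struct Plane { Eigen::Vector3f n; float d; };   // n.dot(x) + d = 0, |n| = 1

// Hypothesize planes from random point triples and keep the one with the
// most inliers; with a roughly level camera, the dominant plane below the
// camera is a reasonable ground-plane candidate.
Plane fitPlaneRansac(const std::vector<Eigen::Vector3f>& pts,
                     int iters = 200, float inlierThresh = 0.02f)
{
    std::mt19937 rng(42);
    std::uniform_int_distribution<size_t> pick(0, pts.size() - 1);
    Plane best{Eigen::Vector3f::UnitY(), 0.0f};
    int bestInliers = -1;
    for (int it = 0; it < iters; ++it) {
        const Eigen::Vector3f& a = pts[pick(rng)];
        const Eigen::Vector3f& b = pts[pick(rng)];
        const Eigen::Vector3f& c = pts[pick(rng)];
        Eigen::Vector3f n = (b - a).cross(c - a);
        if (n.norm() < 1e-6f) continue;          // degenerate sample
        n.normalize();
        float d = -n.dot(a);
        int inliers = 0;
        for (const Eigen::Vector3f& p : pts)
            if (std::abs(n.dot(p) + d) < inlierThresh) ++inliers;
        if (inliers > bestInliers) { bestInliers = inliers; best = {n, d}; }
    }
    return best;
}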



Stereo LSD-SLAM

We propose a novel Large-Scale Direct SLAM algorithm for stereo cameras (Stereo LSD-SLAM) that runs in real-time at a high frame rate on standard CPUs. See below for the full publication.
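Stereo LSD-SLAM combines the fixed-baseline ("static") stereo of the rectified pair with the temporal stereo obtained from camera motion. Per matched pixel, the static part reduces to a one-line conversion from disparity to inverse depth; the sketch below uses our notation:

// With a rectified stereo pair, a match with disparity 'disp' (pixels),
// focal length fx (pixels) and baseline b (meters) gives depth Z = fx*b/disp,
// hence inverse depth 1/Z = disp/(fx*b).
inline float inverseDepthFromDisparity(float disp, float fx, float baseline)
{
    return disp / (fx * baseline);
}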



Omnidirectional LSD-SLAM

We propose a real-time, direct monocular SLAM method for omnidirectional or wide field-of-view fisheye cameras. Both tracking (direct image alignment) and mapping (pixel-wise distance filtering) are directly formulated for the unified omnidirectional model, which can model central imaging devices with a field of view well above 150°. The dataset used for the evaluation can be found here. See below for the full publication.
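For reference, the unified omnidirectional model projects a 3D point first onto the unit sphere and then through a pinhole whose center is shifted by a parameter ξ along the optical axis; ξ = 0 recovers a plain pinhole camera. Below is a minimal sketch with standard parameter names, not lsd_slam's own types:

#include <Eigen/Dense>

struct UnifiedModel { float fx, fy, cx, cy, xi; };

// Project a 3D point with the unified omnidirectional model: normalize onto
// the unit sphere, shift the projection center by xi along z, then apply a
// pinhole projection. Unlike a plain pinhole model, this stays valid over
// very wide fields of view.
Eigen::Vector2f projectUnified(const UnifiedModel& m, const Eigen::Vector3f& X)
{
    Eigen::Vector3f s = X.normalized();   // point on the unit sphere
    float z = s.z() + m.xi;               // shifted projection center
    return Eigen::Vector2f(m.fx * s.x() / z + m.cx,
                           m.fy * s.y() / z + m.cy);
}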



Software

LSD-SLAM is on GitHub: http://github.com/tum-vision/lsd_slam

We support only a ROS-based build system, tested on Ubuntu 12.04 and 14.04 with ROS Fuerte or Indigo. However, ROS is only used for input (video), output (point cloud & poses) and parameter handling; the ROS-dependent code is tightly wrapped and can easily be replaced. To avoid the overhead of maintaining different build systems, however, we do not offer an out-of-the-box ROS-free version. Android-specific optimizations and the AR integration are not part of the open-source release.
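As a rough guide to what replacing the ROS wrapper could look like, here is a driver sketch that feeds grayscale frames directly into the core SlamSystem class. The constructor and method signatures are written from memory and should be verified against SlamSystem.h in lsd_slam_core; loadGray() and the intrinsics are placeholders:

#include <Eigen/Core>
#include "SlamSystem.h"   // from lsd_slam_core

// Hypothetical helper: returns the next grayscale frame, or nullptr when done.
unsigned char* loadGray(unsigned int id);

int main()
{
    const int w = 640, h = 480;
    Eigen::Matrix3f K;          // placeholder intrinsics; use your calibration
    K << 300.0f,   0.0f, 320.0f,
           0.0f, 300.0f, 240.0f,
           0.0f,   0.0f,   1.0f;

    lsd_slam::SlamSystem slam(w, h, K, /*enableSLAM=*/true);

    for (unsigned int id = 0; ; ++id) {
        unsigned char* gray = loadGray(id);
        if (!gray) break;
        if (id == 0)
            slam.randomInit(gray, /*timestamp=*/0.0, id);  // bootstrap on first frame
        else
            slam.trackFrame(gray, id, /*blockUntilMapped=*/true,
                            /*timestamp=*/id / 50.0);      // 50 fps input assumed
    }
    return 0;
}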

Detailed installation and usage instructions can be found in the README.md, including descriptions of the most important parameters. For best results, we recommend using a monochrome global-shutter camera with a fisheye lens.

If you use our code, please cite our respective publications (see below). We are excited to see what you do with LSD-SLAM, so drop us a quick note if you have nice videos, pictures, models or applications.



Datasets

To get you started, we provide some example sequences, each including the input video and camera calibration, the complete generated point cloud to be displayed with the lsd_slam_viewer, as well as a (sparsified) point cloud as .ply, which can be displayed e.g. using MeshLab.

Hint: Run rosbag play -r 25 X_pc.bag while the lsd_slam_viewer is running to replay the result of real-time SLAM at 25x speed, building up the full reconstruction within seconds.

  • ECCV Sequence (7:00 min, 640x480 @ 50 fps)



License

LSD-SLAM is released under the GPLv3 license. A professional version under a different licensing agreement intended for commercial use is available here. Please contact us if you are interested.

Related publications



Conference and Workshop Papers
2015
Reconstructing Street-Scenes in Real-Time From a Driving Car (V. Usenko, J. Engel, J. Stueckler and D. Cremers), In Proc. of the Int. Conference on 3D Vision (3DV), 2015. [bibtex] [pdf]
Large-Scale Direct SLAM for Omnidirectional Cameras (D. Caruso, J. Engel and D. Cremers), In International Conference on Intelligent Robots and Systems (IROS), 2015. [bibtex] [pdf] [video]
Large-Scale Direct SLAM with Stereo Cameras (J. Engel, J. Stueckler and D. Cremers), In International Conference on Intelligent Robots and Systems (IROS), 2015. [bibtex] [pdf] [video]
2014
Semi-Dense Visual Odometry for AR on a Smartphone (T. Schöps, J. Engel and D. Cremers), In International Symposium on Mixed and Augmented Reality (ISMAR), 2014. [bibtex] [pdf] [video] (Best Short Paper Award)
LSD-SLAM: Large-Scale Direct Monocular SLAM (J. Engel, T. Schöps and D. Cremers), In European Conference on Computer Vision (ECCV), 2014. [bibtex] [pdf] [video] (Oral Presentation)
2013
Semi-Dense Visual Odometry for a Monocular Camera (J. Engel, J. Sturm and D. Cremers), In IEEE International Conference on Computer Vision (ICCV), 2013. [bibtex] [pdf] [video]
