Computer Vision Group
Faculty of Informatics
Technical University of Munich


DSO: Direct Sparse Odometry

Contact: Jakob Engel, Prof. Vladlen Koltun, Prof. Daniel Cremers

November 12, 2016: Code released! See below.

Abstract

DSO is a novel direct and sparse formulation for Visual Odometry. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry - represented as inverse depth in a reference frame - and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. DSO does not depend on keypoint detectors or descriptors, thus it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on mostly white walls. The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.
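For reference, the energy that DSO minimizes can be stated compactly (a condensed restatement of the paper's formulation, with notation lightly abbreviated). Each point p, hosted in frame i with inverse depth d_p, contributes a weighted Huber-norm photometric residual to every frame j that observes it:

    \[
    E_{\text{photo}} = \sum_{i \in \mathcal{F}} \sum_{\mathbf{p} \in \mathcal{P}_i} \sum_{j \in \text{obs}(\mathbf{p})} E_{\mathbf{p}j},
    \qquad
    E_{\mathbf{p}j} = \sum_{\mathbf{p} \in \mathcal{N}_{\mathbf{p}}} w_{\mathbf{p}}
    \Big\| \big(I_j[\mathbf{p}'] - b_j\big)
    - \frac{t_j e^{a_j}}{t_i e^{a_i}} \big(I_i[\mathbf{p}] - b_i\big) \Big\|_{\gamma},
    \]

where p' is the reprojection of p into frame j given d_p and the relative camera pose, N_p is a small neighborhood of pixels around p, t_i and t_j are exposure times, (a_i, b_i) and (a_j, b_j) are affine brightness transfer parameters, w_p is a gradient-dependent weight, and \|\cdot\|_\gamma denotes the Huber norm. All of these quantities, including the inverse depths and camera poses, are optimized jointly over a sliding window of keyframes.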

Dataset

Please see here for the TUM monoVO dataset, which was used for large parts of the evaluation. It contains over two hours of video, together with the corresponding evaluation and benchmarking metrics and tools.

Supplementary Material

Supplementary material with all ORB-SLAM and DSO results presented in the paper can be downloaded here: zip (2.7GB). We further provide ready-to-use MATLAB scripts that reproduce all plots in the paper from the above archive; they can be downloaded here: zip (30MB).

October 14, 2016: We have updated the supplementary material with the fixed real-time results for ORB-SLAM, corresponding to the revised version of the paper.

Open-Source Code

The full source code is available on GitHub under GPLv3: https://github.com/JakobEngel/dso. The main project is designed to run on datasets in the TUM monoVO dataset format (i.e., not with a live camera).
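As a rough usage sketch (the repository README is the authoritative reference; the paths below are placeholders), a TUM monoVO sequence is processed along these lines:

    bin/dso_dataset \
        files=path/to/sequence/images.zip \
        calib=path/to/sequence/camera.txt \
        gamma=path/to/sequence/pcalib.txt \
        vignette=path/to/sequence/vignette.png \
        preset=0 mode=0

Here gamma and vignette supply the photometric calibration (response function and vignette map) described above, and mode tells DSO how much of that calibration is available.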

We also provide a minimal example (roughly 200 lines of C++ code) showing how to integrate DSO with a live camera, using ROS for video capture: https://github.com/JakobEngel/dso_ros. Feel free to adapt it to your use case, camera capture environment, or ROS version.
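The heart of such an integration is a short image callback. The following is a condensed sketch modeled on dso_ros; type and function names follow that repository's snapshot of the DSO API and may differ in later revisions:

    #include "FullSystem/FullSystem.h"   // dso::FullSystem: the complete odometry pipeline
    #include "util/Undistort.h"          // dso::Undistort: geometric + photometric undistortion
    #include "util/MinimalImage.h"       // dso::MinimalImageB: thin wrapper around an 8-bit buffer
    #include "util/ImageAndExposure.h"   // dso::ImageAndExposure: undistorted frame + exposure

    // Created once at startup. setGlobalCalib(w, h, K) must be called with the
    // undistorter's output calibration before constructing the FullSystem.
    dso::FullSystem* fullSystem;
    dso::Undistort*  undistorter;  // from Undistort::getUndistorterForFile(calib, gamma, vignette)
    int frameID = 0;

    // Called for every incoming camera image (e.g., from a ROS image subscriber).
    void onImage(unsigned char* data, int width, int height)
    {
        // Wrap the raw 8-bit grayscale buffer without copying it.
        dso::MinimalImageB rawImage(width, height, data);

        // Undistort geometrically and photometrically. The true exposure time is
        // unknown for a generic live camera, so pass 1.0 and let DSO estimate
        // the affine brightness parameters instead.
        dso::ImageAndExposure* undistImg =
            undistorter->undistort<unsigned char>(&rawImage, 1.0f, 0.0);

        // Hand the frame to DSO; tracking and windowed optimization run inside.
        fullSystem->addActiveFrame(undistImg, frameID++);
        delete undistImg;
    }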

Note that, as with LSD-SLAM, we use a dual-licensing model; please contact Jakob Engel or Prof. Daniel Cremers for details on commercial licensing.



Stereo DSO: Large-Scale Direct Sparse Visual Odometry with Stereo Cameras

Contact: Rui Wang, Prof. Daniel Cremers

Abstract

Stereo DSO is a novel method for highly accurate real-time visual odometry estimation in large-scale environments from stereo cameras. It jointly optimizes all model parameters within the active window, including the intrinsic/extrinsic camera parameters of all keyframes and the depth values of all selected pixels. In particular, it integrates constraints from static stereo into the bundle adjustment pipeline of temporal multi-view stereo. Real-time optimization is realized by sampling pixels uniformly from image regions with sufficient intensity gradient. The fixed-baseline stereo resolves scale drift. It also reduces sensitivity to large optical flow and to the rolling shutter effect, which are known shortcomings of direct image alignment methods. Quantitative evaluation demonstrates that the proposed Stereo DSO outperforms existing state-of-the-art visual odometry methods both in terms of tracking accuracy and robustness. Moreover, our method delivers a more precise metric 3D reconstruction than previous dense/semi-dense direct approaches while providing a higher reconstruction density than feature-based methods.
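Schematically (a restatement in slightly abbreviated notation, not the paper's exact equations), the windowed energy extends the DSO photometric objective above with a static-stereo term per point, weighted by a coupling factor:

    \[
    E = \sum_{i \in \mathcal{F}} \sum_{\mathbf{p} \in \mathcal{P}_i}
    \Big( \sum_{j \in \text{obs}(\mathbf{p})} E^{\text{temp}}_{\mathbf{p}j}
    \;+\; \lambda \, E^{\text{static}}_{\mathbf{p}} \Big),
    \]

where E^temp_pj is the temporal multi-view photometric residual as in DSO, E^static_p is the photometric residual of p between the left and right images of the same stereo frame, and λ balances the two constraint types. Because the stereo baseline is fixed and known, the static terms anchor the metric scale, which is what eliminates the scale drift of the monocular formulation.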

Dataset

For this work we use the KITTI Visual Odometry Benchmark and the Frankfurt sequence of the Cityscapes Dataset for evaluation. The full evaluation results can be found in the supplementary material of our ICCV 2017 paper (see below).

Open-Source Code

Under discussion.



Publications

Conference and Workshop Papers
2017
Online Photometric Calibration of Auto Exposure Video for Realtime Visual Odometry and SLAM (P. Bergmann, R. Wang, D. Cremers), In arXiv:1710.02081, 2017. ([arxiv] [video], code coming soon) [bib] [pdf]
Stereo DSO: Large-Scale Direct Sparse Visual Odometry with Stereo Cameras (R. Wang, M. Schwörer, D. Cremers), In International Conference on Computer Vision (ICCV), 2017. ([supplementary] [video] [arxiv]) [bib] [pdf]
Challenges in Monocular Visual Odometry: Photometric Calibration, Motion Bias and Rolling Shutter Effect (N. Yang, R. Wang, X. Gao, D. Cremers), In arXiv:1705.04300, 2017. (Improved and extended version. [arxiv]) [bib] [pdf]
2016
Direct Sparse Odometry (J. Engel, V. Koltun, D. Cremers), In arXiv:1607.02565, 2016. [bib] [pdf]
A Photometrically Calibrated Benchmark For Monocular Visual Odometry (J. Engel, V. Usenko, D. Cremers), In arXiv:1607.02555, 2016. [bib] [pdf]
