Computer Vision Group
Faculty of Informatics
Technical University of Munich

Stereo DSO: Large-Scale Direct Sparse Visual Odometry with Stereo Cameras

Abstract

Stereo DSO is a novel method for highly accurate real-time visual odometry estimation in large-scale environments using stereo cameras. It jointly optimizes all model parameters within the active window, including the intrinsic/extrinsic camera parameters of all keyframes and the depth values of all selected pixels. In particular, it integrates constraints from static stereo into the bundle adjustment pipeline of temporal multi-view stereo. Real-time optimization is achieved by sampling pixels uniformly from image regions with sufficient intensity gradient. The fixed stereo baseline resolves scale drift. It also reduces sensitivity to large optical flow and to the rolling shutter effect, which are known shortcomings of direct image alignment methods. Quantitative evaluation demonstrates that the proposed Stereo DSO outperforms existing state-of-the-art visual odometry methods in terms of both tracking accuracy and robustness. Moreover, our method delivers a more precise metric 3D reconstruction than previous dense/semi-dense direct approaches, while providing a higher reconstruction density than feature-based methods.
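
To illustrate the gradient-based pixel sampling mentioned above, the following minimal Python sketch picks at most one high-gradient candidate pixel per grid cell. The grid size, threshold and exact selection strategy are illustrative assumptions, not the implementation used in the paper.

import numpy as np

def select_candidate_pixels(img, grid=8, grad_thresh=7.0):
    # Illustrative sketch (grid size and threshold are assumptions): keep at
    # most one pixel per grid cell, and only if its intensity gradient
    # magnitude exceeds the threshold.
    img = np.asarray(img, dtype=np.float32)
    gy, gx = np.gradient(img)            # image gradients along rows and columns
    grad = np.hypot(gx, gy)              # gradient magnitude
    h, w = img.shape
    candidates = []
    for y0 in range(0, h - grid + 1, grid):
        for x0 in range(0, w - grid + 1, grid):
            cell = grad[y0:y0 + grid, x0:x0 + grid]
            dy, dx = np.unravel_index(np.argmax(cell), cell.shape)
            if cell[dy, dx] > grad_thresh:
                candidates.append((x0 + dx, y0 + dy))  # (column, row)
    return candidates

Spreading candidates over a coarse grid keeps the selection roughly uniform across the image while discarding textureless regions that carry no photometric information.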

Results

For this work we evaluate on the KITTI Visual Odometry Benchmark and the Frankfurt sequence of the Cityscapes dataset. The full evaluation results can be found in the supplementary material of our ICCV 2017 paper. Below we show some representative results.

KITTI Visual Odometry Benchmark

The following 4 figures show the average translational and rotational errors with respect to driving intervals (first row) and driving speed (second row) on the KITTI VO testing set. We compare our method with the current state-of-the-art direct and feature-based methods, namely Stereo LSD-SLAM and ORB-SLAM2. Note that both compared methods are SLAM systems with loop closure based on pose graph optimization (ORB-SLAM2 additionally with global bundle adjustment), whereas ours is pure visual odometry.
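
For reference, the sketch below outlines how a KITTI-style average translational error over fixed path lengths can be computed from ground-truth and estimated poses (lists of 4x4 matrices). It is a simplified illustration; the official KITTI devkit differs in details such as frame stepping, the rotational error and the speed-based binning.

import numpy as np

SEGMENT_LENGTHS = (100, 200, 300, 400, 500, 600, 700, 800)  # meters

def path_lengths(poses):
    # Cumulative driving distance per frame from ground-truth poses (4x4).
    dists = [0.0]
    for a, b in zip(poses[:-1], poses[1:]):
        dists.append(dists[-1] + float(np.linalg.norm(b[:3, 3] - a[:3, 3])))
    return dists

def avg_translational_error(gt, est, seg_len):
    # Average relative translational error (percent) over all segments of
    # roughly seg_len meters, in the spirit of the KITTI odometry metric.
    dists = path_lengths(gt)
    errors = []
    for i in range(len(gt)):
        j = next((k for k in range(i, len(gt)) if dists[k] - dists[i] >= seg_len), None)
        if j is None:
            break
        rel_gt = np.linalg.inv(gt[i]) @ gt[j]     # ground-truth relative motion
        rel_est = np.linalg.inv(est[i]) @ est[j]  # estimated relative motion
        delta = np.linalg.inv(rel_est) @ rel_gt   # residual transform
        errors.append(np.linalg.norm(delta[:3, 3]) / seg_len * 100.0)
    return float(np.mean(errors)) if errors else None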

For qualitative results, we run our method on all sequences of the training set and compare the estimated camera trajectories to the provided ground truth. Results on some example sequences follow. All estimated camera trajectories can be downloaded here: Camera Trajectories.
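
Assuming the downloadable trajectories use the KITTI odometry pose format (one pose per line, 12 values forming a row-major 3x4 [R|t] matrix), they can be read with the sketch below; the format is an assumption, so please check the files before use. The resulting list of 4x4 matrices can be fed directly into the error sketch above.

import numpy as np

def load_poses(path):
    # Assumes the KITTI odometry format: one pose per line, 12 values forming
    # a row-major 3x4 [R|t] matrix.
    poses = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            vals = np.array(line.split(), dtype=np.float64)
            T = np.eye(4)
            T[:3, :4] = vals.reshape(3, 4)
            poses.append(T)
    return poses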

Update July 2017: After the ICCV 2017 deadline, we extended our method to a full SLAM system with additional components for map maintenance, loop detection and loop closure. Our performance on KITTI improves slightly further, as shown by the black plot below. A demonstration video is shown above.

Frankfurt Sequence of Cityscapes

To verify that our method works with industrial-grade cameras (high dynamic range, rolling shutter with high pixel read-out speed), we evaluate it on the Frankfurt sequence of the Cityscapes dataset. We split the sequence into several smaller segments, each at a scale comparable to the KITTI sequences. The estimated camera trajectories and their alignments to the GPS trajectory are shown below (blue: estimates, red: GPS). Note that the provided GPS coordinates are not accurate.
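
As an illustration of such an alignment, the sketch below rigidly aligns estimated positions to GPS positions with the Kabsch algorithm, keeping the scale fixed since stereo visual odometry is metric. It assumes the GPS coordinates have already been converted into a local metric frame; the paper does not specify its exact alignment procedure.

import numpy as np

def align_rigid(est_xyz, gps_xyz):
    # Kabsch alignment of estimated positions to GPS positions (rotation and
    # translation only; scale is kept fixed since stereo VO is metric).
    # Assumes both inputs are Nx3 arrays in a common local metric frame.
    est = np.asarray(est_xyz, dtype=np.float64)
    gps = np.asarray(gps_xyz, dtype=np.float64)
    mu_e, mu_g = est.mean(axis=0), gps.mean(axis=0)
    H = (est - mu_e).T @ (gps - mu_g)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_g - R @ mu_e
    return R, t                                   # aligned = est @ R.T + t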

Some qualitative results on the 3D reconstruction are shown below.



Publications

Journal Articles
2018
Online Photometric Calibration of Auto Exposure Video for Realtime Visual Odometry and SLAM (P. Bergmann, R. Wang, D. Cremers), In IEEE Robotics and Automation Letters (RA-L), volume 3, 2018. (This paper was also selected by ICRA'18 for presentation at the conference.) [arxiv] [video] [code] [project] [bib] [pdf] ICRA'18 Best Vision Paper Award - Finalist
Direct Sparse Odometry (J. Engel, V. Koltun, D. Cremers), In IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018. [bib] [pdf]
Conference and Workshop Papers
2018
Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry (N. Yang, R. Wang, J. Stueckler, D. Cremers), In European Conference on Computer Vision (ECCV), 2018. [arxiv] [supplementary] [video] [bib] Oral Presentation
LDSO: Direct Sparse Odometry with Loop Closure (X. Gao, R. Wang, N. Demmel, D. Cremers), In International Conference on Intelligent Robots and Systems (IROS), 2018. [arxiv] [bib]
Direct Sparse Visual-Inertial Odometry using Dynamic Marginalization (L. von Stumberg, V. Usenko, D. Cremers), In International Conference on Robotics and Automation (ICRA), 2018. [supplementary] [video] [arxiv] [bib] [pdf]
2017
Stereo DSO: Large-Scale Direct Sparse Visual Odometry with Stereo Cameras (R. Wang, M. Schwörer, D. Cremers), In International Conference on Computer Vision (ICCV), 2017. [supplementary] [video] [arxiv] [project] [bib] [pdf]
2016
Direct Sparse Odometry (J. Engel, V. Koltun, D. Cremers), In arXiv:1607.02565, 2016. [bib] [pdf]
A Photometrically Calibrated Benchmark For Monocular Visual Odometry (J. Engel, V. Usenko, D. Cremers), In arXiv:1607.02555, 2016. [bib] [pdf]

Informatik IX
Chair for Computer Vision & Artificial Intelligence

Boltzmannstrasse 3
85748 Garching

info@vision.in.tum.de