===== Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry =====

**Contact:** [[members:yangn|Nan Yang]], [[members:wangr|Rui Wang]], [[https://www.is.mpg.de/person/jstueckler|Jörg Stückler]], [[members:cremers|Prof. Daniel Cremers]]

<html><center><iframe width="640" height="360"
src="https://www.youtube.com/embed/sLZOeC9z_tw" frameborder="0" allowfullscreen></iframe>
</center></html>
<html><br /></html>
==== Abstract ====
Monocular visual odometry approaches that purely rely on geometric cues are prone to scale drift and require sufficient motion parallax in successive frames for motion estimation and 3D reconstruction. In this paper, we propose to leverage deep monocular depth prediction to overcome limitations of geometry-based monocular visual odometry. To this end, we incorporate deep depth predictions into [[:research:vslam:dso|DSO]] as direct virtual stereo measurements. For depth prediction, we design a novel deep network that refines predicted depth from a single image in a two-stage process. We train our network in a semi-supervised way on photoconsistency in stereo images and on consistency with accurate sparse depth reconstructions from [[:research:vslam:stereo-dso|Stereo DSO]]. Our depth predictions outperform state-of-the-art approaches to monocular depth estimation on the KITTI benchmark. Moreover, our Deep Virtual Stereo Odometry clearly exceeds previous monocular and deep-learning-based methods in accuracy. It even achieves performance comparable to state-of-the-art stereo methods while relying on only a single camera.

{{:research:vslam:dvso:teaser_pic.png?640|}}
==== Semi-Supervised Deep Monocular Depth Estimation ====
We propose a semi-supervised approach to deep monocular depth estimation. It builds on three key ingredients: self-supervised learning from photoconsistency in a stereo setup, supervised learning based on accurate sparse depth reconstruction by Stereo DSO, and **StackNet**, a two-stage network with a stacked encoder-decoder architecture.

{{:research:vslam:dvso:simp_network_new.png?640|}}
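The two supervision signals combine into a single training objective: a self-supervised photoconsistency term on stereo pairs and a supervised term on the sparse but accurate Stereo DSO reconstructions. The following PyTorch-style sketch illustrates the idea under assumed names (''warp_right_to_left'', ''w_photo'', and ''w_sparse'' are illustrative, and the full objective in the paper contains further terms such as regularization); it is not the actual training code.

<code python>
import torch
import torch.nn.functional as F

def warp_right_to_left(right, disp):
    # Differentiable warp of the right image into the left view using the
    # predicted left disparity; grid_sample expects coordinates in [-1, 1].
    b, _, h, w = right.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=right.device, dtype=right.dtype),
        torch.arange(w, device=right.device, dtype=right.dtype),
        indexing="ij")
    xs = xs.unsqueeze(0) - disp.squeeze(1)   # shift columns by disparity
    ys = ys.unsqueeze(0).expand_as(xs)
    grid = torch.stack((2 * xs / (w - 1) - 1, 2 * ys / (h - 1) - 1), dim=-1)
    return F.grid_sample(right, grid, align_corners=True)

def semi_supervised_loss(left, right, pred_disp, dso_disp, dso_mask,
                         w_photo=1.0, w_sparse=1.0):
    # Self-supervised signal: the warped right image should match the left.
    photo = (left - warp_right_to_left(right, pred_disp)).abs().mean()
    # Supervised signal: match Stereo DSO disparities where they exist.
    sparse = ((pred_disp - dso_disp).abs() * dso_mask).sum() \
             / dso_mask.sum().clamp(min=1)
    return w_photo * photo + w_sparse * sparse
</code>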
==== Deep Virtual Stereo Odometry ====
Deep Virtual Stereo Odometry (**DVSO**) builds on the windowed sparse direct bundle adjustment formulation of monocular DSO. We use our disparity predictions in DSO in two key ways: first, we initialize the depth maps of new keyframes from the predicted disparities. Beyond this rather straightforward use, we also incorporate virtual direct image alignment constraints into the windowed direct bundle adjustment of DSO. We obtain these constraints by warping images with the depth estimated by bundle adjustment and the right disparities predicted by our network, assuming a virtual stereo setup.

{{:research:vslam:dvso:system_overview_long_new.png?640|}}
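Both uses of the disparity prediction have a simple geometric core: a predicted left disparity initializes keyframe depth through the standard (virtual) stereo relation, and the virtual stereo constraint checks photoconsistency when a point is passed through the virtual right view via the bundle-adjusted depth and the predicted right disparity. Below is a schematic NumPy sketch for a single point, assuming rectified virtual stereo geometry with focal length ''fx'' and baseline ''baseline''; the variable names are ours and the residual is a simplification of the one used in the bundle adjustment.

<code python>
import numpy as np

def init_depth_from_disparity(disp, fx, baseline):
    # Keyframe depth initialization: depth = fx * b / disparity,
    # with the virtual baseline taken from the stereo rig seen in training.
    return fx * baseline / np.maximum(disp, 1e-6)

def virtual_stereo_residual(img_left, u, v, inv_depth, disp_right, fx, baseline):
    # Schematic virtual stereo constraint for a point (u, v) hosted in the
    # left image:
    #   1) project it into the virtual right view using the inverse depth
    #      estimated by bundle adjustment,
    #   2) map it back to the left image with the network-predicted right
    #      disparity at that location,
    #   3) compare intensities; the residual vanishes when the bundle-adjusted
    #      depth and the predicted disparity agree.
    u_right = u - fx * baseline * inv_depth
    u_back = u_right + disp_right[v, int(round(u_right))]
    return img_left[v, int(round(u_back))] - img_left[v, u]
</code>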
==== Results ====
We quantitatively evaluate our StackNet against other state-of-the-art monocular depth prediction methods on the publicly available KITTI dataset. For DVSO, we evaluate its tracking accuracy on the KITTI odometry benchmark against other state-of-the-art monocular as well as stereo visual odometry systems. In the [[:research:vslam:dvso|supplementary material]], we also show the generalization ability of StackNet as well as DVSO.
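The depth comparison below reports the error metrics customarily used on KITTI: absolute relative error, squared relative error, RMSE, log RMSE, and the threshold accuracies δ < 1.25^k. For reference, a compact NumPy sketch of these standard metrics:

<code python>
import numpy as np

def kitti_depth_metrics(gt, pred):
    # Standard monocular-depth error metrics, computed over pixels with
    # valid ground-truth depth.
    valid = gt > 0
    gt, pred = gt[valid], pred[valid]
    thresh = np.maximum(gt / pred, pred / gt)
    return {
        "abs_rel":  np.mean(np.abs(gt - pred) / gt),
        "sq_rel":   np.mean((gt - pred) ** 2 / gt),
        "rmse":     np.sqrt(np.mean((gt - pred) ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2)),
        "delta_1":  np.mean(thresh < 1.25),
        "delta_2":  np.mean(thresh < 1.25 ** 2),
        "delta_3":  np.mean(thresh < 1.25 ** 3),
    }
</code>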
=== Monocular Depth Estimation ===

{{:research:vslam:dvso:depth_table.png?640|}}

{{:research:vslam:dvso:depth_comparison.png?640|}}

=== Monocular Visual Odometry ===

{{:research:vslam:dvso:vo_table.png?640|}}

{{:research:vslam:dvso:vo_error.png?640|}}

{{:research:vslam:dvso:traj_01.png?640|}}

==== Publications ====
<bibtex>
<keywords>dvso</keywords>
</bibtex>