Computer Vision Group
TUM School of Computation, Information and Technology
Technical University of Munich

MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera

Contact: Nan Yang, Lukas von Stumberg, Niclas Zeller

[June 14, 2021] Code released! See below.



Code: https://github.com/Brummi/MonoRec

Abstract

In this paper, we propose MonoRec, a semi-supervised monocular dense reconstruction architecture that predicts depth maps from a single moving camera in dynamic environments. MonoRec is based on a multi-view stereo setting which encodes the information of multiple consecutive images in a cost volume. To deal with dynamic objects in the scene, we introduce a MaskModule that predicts moving object masks by leveraging the photometric inconsistencies encoded in the cost volumes. Unlike other multi-view stereo methods, MonoRec is able to predict accurate depths for both static and moving objects by leveraging the predicted masks. Furthermore, we present a novel multi-stage training scheme with a semi-supervised loss formulation that does not require LiDAR depth values. We carefully evaluate MonoRec on the KITTI dataset and show that it achieves state-of-the-art performance compared to both multi-view and single-view methods. With the model trained on KITTI, we further demonstrate that MonoRec is able to generalize well to both the Oxford RobotCar dataset and the more challenging TUM-Mono dataset recorded by a handheld camera.
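The core idea of a photometric cost volume, as described above, can be illustrated with a small sketch. The snippet below is not MonoRec's implementation; it assumes the source images have already been warped into the reference view for each depth hypothesis (in MonoRec this warping uses camera poses from a visual-odometry system, which is omitted here), and it recovers depth by a simple winner-takes-all over the per-pixel photometric error:

```python
import numpy as np

def photometric_cost_volume(ref, warped_stack):
    """L1 photometric error between the reference image (H, W) and each
    source view pre-warped to a depth hypothesis (D, H, W)."""
    return np.abs(warped_stack - ref[None])  # (D, H, W)

def depth_from_cost(cost, depth_hypotheses):
    """Winner-takes-all: pick the depth hypothesis with the lowest cost."""
    return depth_hypotheses[np.argmin(cost, axis=0)]  # (H, W)

# Toy example: 4 depth hypotheses over a 2x2 reference image.
ref = np.array([[0.2, 0.8], [0.5, 0.1]])
depths = np.array([1.0, 2.0, 4.0, 8.0])
# Hypothetical pre-warped source views; only the third hypothesis
# aligns well with the reference, so it wins at every pixel.
warped = np.stack([ref + 0.3, ref + 0.05, ref - 0.01, ref - 0.4])
cost = photometric_cost_volume(ref, warped)
depth_map = depth_from_cost(cost, depths)  # every pixel picks depth 4.0
```

For moving objects, the warped views disagree with the reference at every depth hypothesis, leaving a uniformly high cost; MonoRec's MaskModule exploits exactly this inconsistency signal to segment them.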

Code

The source code of MonoRec has been released on GitHub under the permissive MIT License: https://github.com/Brummi/MonoRec.

Publications


Conference and Workshop Papers
2021
MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera (F. Wimbauer, N. Yang, L. von Stumberg, N. Zeller and D. Cremers), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. ([project page]) [bibtex] [arXiv:2011.11814]
2020
D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry (N. Yang, L. von Stumberg, R. Wang and D. Cremers), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. [bibtex] [arXiv:2003.01060] [pdf] Oral Presentation
2018
Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry (N. Yang, R. Wang, J. Stueckler and D. Cremers), In European Conference on Computer Vision (ECCV), 2018. ([arxiv], [supplementary], [project]) [bibtex] Oral Presentation


Informatik IX
Computer Vision Group

Boltzmannstrasse 3
85748 Garching
info@vision.in.tum.de
