====== Nan Yang ======
<html><center><a href="https://nan-yang.me/" target="_blank"><h2 style="color:blue;text-decoration: underline;">>>Personal Website<<</h2></a></center></html>

  * [12.2021] [[https://vision.in.tum.de/research/vslam/tandem|TANDEM]] won the [[https://3dv2021.surrey.ac.uk/prizes/|Best Demo Award]] at //3DV 2021//!
  * [09.2021] [[https://vision.in.tum.de/research/vslam/tandem|TANDEM]] accepted at //CoRL// 2021. Check out the [[https://arxiv.org/abs/2111.07418|paper]] and [[https://github.com/tum-vision/tandem|code]].
  * [06.2021] [[https://vision.in.tum.de/research/monorec|MonoRec]] (CVPR 2021) code released on [[https://github.com/Brummi/MonoRec|GitHub]].
  * [05.2021] Received an //Outstanding Reviewer// award for [[http://cvpr2021.thecvf.com/node/184|CVPR 2021]].
  * [04.2021] The [[https://www.4seasons-dataset.com/|4Seasons]] dataset is now public.
  * [12.2020] Finished my internship at Facebook Reality Labs, where I worked on collaborative mapping.
  * [10.2020] [[https://vision.in.tum.de/research/vslam/lm-reloc|LM-Reloc]] accepted at //3DV// 2020.
  * [09.2020] Started an internship at //Facebook Reality Labs//.
  * [05.2020] Co-organized the [[https://sites.google.com/view/mlad-eccv2020|Map-based Localization for Autonomous Driving Workshop]] at //ECCV// 2020.
  * [02.2020] [[https://vision.in.tum.de/research/vslam/d3vo|D3VO]] accepted as an oral presentation at //CVPR// 2020.
  
===== Brief Bio =====
Find me on [[https://scholar.google.de/citations?user=pUj2ffwAAAAJ&hl=en|Google Scholar]], [[https://www.linkedin.com/in/nan-yang-089aa8aa/|LinkedIn]], and [[https://twitter.com/NanYang719|Twitter]].

I received my Bachelor's degree in Computer Science from Beijing University of Posts and Telecommunications and my Master's degree in Informatics from the Technical University of Munich. Since May 2018, I have been a Ph.D. student and senior computer vision researcher at [[https://www.artisense.ai/|Artisense]], a startup co-founded by [[:members:cremers|Prof. Daniel Cremers]]. From September 2020 to December 2020, I was an intern at Facebook Reality Labs, working on collaborative mapping.

===== Research =====
My research interests lie in enhancing classical 3D vision, e.g., visual odometry / simultaneous localization and mapping (SLAM), re-localization, and dense reconstruction, with the aid of deep neural networks. Here are some selected projects:

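To make the common thread concrete: direct methods estimate camera motion by minimizing a photometric error between frames, and the projects below inject network predictions (depth, pose, uncertainty) into exactly this pipeline. The following is a minimal illustrative sketch of a single-pixel photometric residual with network-predicted depth; the intrinsics and function are placeholders for illustration, not code from any of the papers.

<code python>
import numpy as np

# Placeholder pinhole intrinsics (fx, fy, cx, cy) -- illustrative values only.
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])

def photometric_residual(I_ref, I_tgt, u, v, depth, R, t):
    """Photometric error for one pixel: back-project (u, v) using the
    (e.g., network-predicted) depth, transform the 3D point into the
    target frame with the relative pose (R, t), re-project, and compare
    image intensities."""
    # Back-project pixel (u, v) to a 3D point in the reference frame.
    p_ref = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # Rigid-body transform into the target camera frame.
    p_tgt = R @ p_ref + t
    # Perspective projection back onto the target image plane.
    q = K @ (p_tgt / p_tgt[2])
    u2, v2 = int(round(q[0])), int(round(q[1]))
    # Nearest-neighbour lookup for brevity; real systems interpolate,
    # check image bounds, and weight residuals (e.g., by a predicted
    # uncertainty, as in D3VO).
    return float(I_ref[v, u]) - float(I_tgt[v2, u2])
</code>

A complete system sums squared residuals of this form over many pixels and keyframes and minimizes them over the poses (and depths) with a nonlinear least-squares solver.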
==== Visual Odometry ====
  * **Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry**, //ECCV// 2018, Oral Presentation.
<html><center><iframe width="640" height="360"
src="https://www.youtube.com/embed/sLZOeC9z_tw" frameborder="0" allowfullscreen></iframe>
</center></html>
<html><br /></html>
  * **D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry**, //CVPR// 2020, Oral Presentation.
<html><center><iframe width="640" height="360"
src="https://www.youtube.com/embed/a7CAkJbhcm8" frameborder="0" allowfullscreen></iframe>
</center></html>
<html><br /></html>
  * **Multi-Frame GAN: Image Enhancement for Stereo Visual Odometry in Low Light**, //CoRL// 2019, Long Oral Presentation.
<html><center><iframe width="640" height="360"
src="https://www.youtube.com/embed/OdYkFuZv204" frameborder="0" allowfullscreen></iframe>
</center></html>
<html><br /></html>

==== Dense Reconstruction ====
  * **MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera**, //CVPR// 2021.
<html><center><iframe width="640" height="360"
src="https://www.youtube.com/embed/-gDSBIm0vgk" frameborder="0" allowfullscreen></iframe>
</center></html>
<html><br /></html>
  * **TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo**, //CoRL// 2021, **<fc #ff0000>Best Demo Award</fc>** at //3DV// 2021.
<html><center><iframe width="640" height="360"
src="https://www.youtube.com/embed/L4C8Q6Gvl1w" frameborder="0" allowfullscreen></iframe>
</center></html>
<html><br /></html>

==== Re-localization ====
  * **LM-Reloc: Levenberg-Marquardt Based Direct Visual Relocalization**, //3DV// 2020 (see the optimizer sketch below).
<html><center><iframe width="640" height="360"
src="https://www.youtube.com/embed/i7TyTwKD734" frameborder="0" allowfullscreen></iframe>
</center></html>
<html><br /></html>

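For readers unfamiliar with the optimizer in the title: Levenberg-Marquardt blends Gauss-Newton with gradient descent through a damping term. Below is a generic toy implementation on a 1-D curve-fitting problem; it illustrates the optimizer itself and is not LM-Reloc's code, where the residuals are instead direct image-alignment errors over a 6-DoF relative pose.

<code python>
import numpy as np

def lm_step(residuals, jacobian, x, lam):
    """One Levenberg-Marquardt update: solve the damped normal equations
    (J^T J + lam * I) dx = -J^T r for the parameter increment dx."""
    r = residuals(x)
    J = jacobian(x)
    H = J.T @ J + lam * np.eye(x.size)  # damped Gauss-Newton Hessian
    return x + np.linalg.solve(H, -J.T @ r)

def levenberg_marquardt(residuals, jacobian, x0, lam=1e-3, iters=50):
    """Minimize 0.5 * ||r(x)||^2: shrink lam after a successful step
    (towards Gauss-Newton), grow it after a failed one (towards small
    gradient-descent steps)."""
    x = x0.copy()
    cost = 0.5 * np.sum(residuals(x) ** 2)
    for _ in range(iters):
        x_new = lm_step(residuals, jacobian, x, lam)
        cost_new = 0.5 * np.sum(residuals(x_new) ** 2)
        if cost_new < cost:   # accept the step, trust the model more
            x, cost, lam = x_new, cost_new, lam * 0.5
        else:                 # reject the step, damp more heavily
            lam *= 10.0
    return x

# Toy usage: recover a in y = exp(a * t) from samples (expects ~0.7).
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t)
res = lambda x: np.exp(x[0] * t) - y
jac = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)
print(levenberg_marquardt(res, jac, np.array([0.0])))
</code>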
==== Object-level Perception ====
  * **DirectShape: Photometric Alignment of Shape Priors for Visual Vehicle Pose and Shape Estimation**, //ICRA// 2020.
<html><center><iframe width="640" height="360"
src="https://www.youtube.com/embed/QqP6zdx5OKw" frameborder="0" allowfullscreen></iframe>
</center></html>
<html><br /></html>

  * **Learning Monocular 3D Vehicle Detection without 3D Bounding Box Labels**, //GCPR// 2020.
<html><center><iframe width="640" height="360"
src="https://www.youtube.com/embed/Qxj0-jASHUg" frameborder="0" allowfullscreen></iframe>
</center></html>
<html><br /></html>
  
===== Professional Services =====
  * Journal reviewer: RA-L, AURO, ISPRS
  * Conference reviewer: CVPR, ECCV, ICCV, ICLR, AAAI, ICRA, IROS
  * Co-organized the Map-based Localization for Autonomous Driving Workshop at [[https://sites.google.com/view/mlad-eccv2020|ECCV 2020]] and [[https://sites.google.com/view/mlad-iccv2021|ICCV 2021]].

===== Publications =====
