====== GN-Net: The Gauss-Newton Loss for Multi-Weather Relocalization ======
**Contact:** [[members:stumberg|Lukas von Stumberg]], [[members:wenzel]], [[members:khamuham]], [[members:cremers|Prof. Daniel Cremers]]
  
<html>
<iframe width="560" height="315" src="https://www.youtube.com/embed/gcbKeKX2eiE" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
</html>
  
===== Abstract =====
Direct SLAM methods have shown exceptional performance on odometry tasks. However, they are susceptible to dynamic lighting and weather changes and also suffer from bad initialization on large baselines. To overcome this, we propose **GN-Net**: a network optimized with the novel **Gauss-Newton loss** for training weather-invariant deep features, tailored for direct image alignment. Our network can be trained with pixel correspondences between images, even from different sequences. Experiments on both simulated and real-world datasets demonstrate that our approach is more robust against bad initialization, variations in daytime, and weather changes, thereby outperforming state-of-the-art direct and indirect methods. Furthermore, we release an evaluation benchmark for relocalization tracking against different types of weather.
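The page does not spell out the Gauss-Newton loss itself. As a rough illustration of the underlying idea only (a minimal sketch of my own, not the authors' released code), the snippet below evaluates, for a single ground-truth correspondence between two dense feature maps, the negative log-likelihood of the correct pixel under the Gaussian induced by one Gauss-Newton step of direct image alignment. The function names, the 2D-translation-only model, and the damping term `eps` are all illustrative assumptions.

```python
import numpy as np

def spatial_gradient(fmap, x):
    """Central-difference gradient of a (H, W, C) feature map at pixel x = (row, col)."""
    r, c = x
    d_row = (fmap[r + 1, c] - fmap[r - 1, c]) / 2.0   # (C,)
    d_col = (fmap[r, c + 1] - fmap[r, c - 1]) / 2.0   # (C,)
    return np.stack([d_row, d_col], axis=1)           # Jacobian, shape (C, 2)

def gauss_newton_loss(f_ref_vec, f_tgt, x_start, x_gt, eps=1e-6):
    """Sketch: negative log-likelihood of x_gt under the Gaussian N(mu, H^-1)
    obtained from one Gauss-Newton step on the feature residual (up to a constant)."""
    r, c = x_start
    residual = f_tgt[r, c] - f_ref_vec                # feature residual, (C,)
    J = spatial_gradient(f_tgt, x_start)              # (C, 2)
    H = J.T @ J + eps * np.eye(2)                     # damped GN Hessian, (2, 2)
    b = J.T @ residual                                # (2,)
    mu = np.asarray(x_start, float) - np.linalg.solve(H, b)  # one GN update
    diff = np.asarray(x_gt, float) - mu
    return 0.5 * diff @ H @ diff - 0.5 * np.log(np.linalg.det(H))
```

Intuitively, minimizing this loss pushes the Gauss-Newton estimate `mu` toward the correct correspondence while encouraging a well-conditioned system `H`, which is what gives the learned features a larger convergence basin than raw grayscale intensities.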
  
===== Downloads =====
The **paper** can be downloaded at: https://arxiv.org/abs/1904.11932 \\
The **video** is available at: [[https://youtu.be/gcbKeKX2eiE|https://youtu.be/gcbKeKX2eiE]] \\
The **supplementary** can be downloaded at: {{ :research:vslam:gn-net:gn-net-supplementary.pdf |}} \\
The **relocalization tracking benchmark dataset** can be downloaded at: [[https://vision.in.tum.de/webshare/g/gn-net-benchmark/gnnet_benchmark_v1.0.zip|gnnet_benchmark_v1.0.zip]]
  
<bibtex>
<keywords>gn-net</keywords>
</bibtex>
