~~NOCACHE~~
====== DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization ======
**Contact:

{{ :
===== Abstract =====
For relocalization in large-scale point clouds, we propose the first approach that unifies global place recognition and local 6DoF pose refinement. To this end, we design a Siamese network that jointly learns 3D local feature detection and description directly from raw 3D points. It integrates FlexConv and Squeeze-and-Excitation (SE) to ensure that the learned local descriptor captures multi-level geometric information and channel-wise relations. For detecting 3D keypoints, we predict the discriminativeness of the local descriptors in an unsupervised manner. We generate the global descriptor by directly aggregating the learned local descriptors with an effective attention mechanism. In this way, local and global 3D descriptors are inferred in a single forward pass. Experiments on various benchmarks demonstrate that our method achieves competitive results both for global point cloud retrieval and for local point cloud registration in comparison to state-of-the-art approaches. To validate the generalizability and robustness of our 3D keypoints, we demonstrate that our method also performs favorably without fine-tuning on the registration of point clouds generated by a visual SLAM system.
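
As a rough illustration of two of the components above, the following is a minimal PyTorch-style sketch of a Squeeze-and-Excitation gate and of attention-weighted aggregation of local descriptors into a global descriptor. It is not our released implementation (see the Code section below); the module names, the descriptor dimension, and the reduction ratio are assumptions made for this example.
<code python>
# Illustrative sketch only -- a simplified PyTorch re-implementation, not the
# released code. Dimensions and names are assumptions made for this example.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SqueezeExcite(nn.Module):
    """Channel-wise recalibration of local descriptors (Squeeze-and-Excitation)."""
    def __init__(self, dim=128, reduction=4):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim // reduction)
        self.fc2 = nn.Linear(dim // reduction, dim)

    def forward(self, x):                     # x: (B, N, D) local descriptors
        s = x.mean(dim=1)                     # squeeze: average over all N points
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))  # excite: gates in (0, 1)
        return x * s.unsqueeze(1)             # reweight every descriptor's channels

class AttentionAggregation(nn.Module):
    """Pool N local descriptors into one global descriptor via learned attention."""
    def __init__(self, dim=128):
        super().__init__()
        self.score = nn.Linear(dim, 1)        # one attention score per descriptor

    def forward(self, x):                     # x: (B, N, D)
        w = torch.softmax(self.score(x), dim=1)  # (B, N, 1), weights sum to 1
        g = (w * x).sum(dim=1)                   # (B, D) weighted sum
        return F.normalize(g, dim=-1)            # L2-normalize for retrieval

# Toy usage: two point clouds, 4096 local descriptors of dimension 128 each.
se, agg = SqueezeExcite(128), AttentionAggregation(128)
g = agg(se(torch.randn(2, 4096, 128)))
print(g.shape)   # torch.Size([2, 128])
</code>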
===== Update =====
  * [11.11.2020] For ease of comparison, we upload the numbers used to draw the plots in the paper ({{:research:
===== ECCV Spotlight Presentation =====

<
===== Citation =====
If you find our work useful in your research, please consider citing:
<code>
@inproceedings{du2020dh3d,
  title={DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization},
  author={Du, Juan and Wang, Rui and Cremers, Daniel},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2020}
}
</code>
===== Code =====
Code and the pre-trained models can be accessed from the [[https://
===== Datasets =====
Our model is mainly trained and tested on the LiDAR point clouds from the [[https://

<
readfile("/
</
===== Qualitative Results =====
{{ :

{{ :
{{ :
===== Other Materials =====
  * Paper: {{:
  * arXiv: [[https://
  * Paper summary slides: {{:
  * Paper summary video: [[https://
  * ECCV spotlight presentation slides: {{:
  * The numbers used to draw the plots (Fig. 4 and Fig. 6) in the paper: {{:
==== Publications ====