members:steinbrf [2014/02/18 17:00] (current) steinbrf
=== Visual Odometry ===
At ICCV 2011 we published a method for estimating the camera pose from RGB-D images.
In the video below, the Kinect camera moves through a static scene and the camera poses are accurately estimated.
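Direct RGB-D odometry of this kind estimates the camera motion by minimizing a photometric error between warped images. The toy sketch below does the same thing in one dimension, recovering a sub-pixel image shift with Gauss-Newton; the function name and the 1-D setting are illustrative simplifications of the full 6-DoF problem, not the published algorithm.

```python
import numpy as np

def photometric_shift(ref, cur, iters=20):
    """Estimate the shift t with cur(x) = ref(x + t) by Gauss-Newton
    on the photometric error -- a 1-D toy version of dense image alignment."""
    x = np.arange(len(ref), dtype=float)
    t = 0.0
    for _ in range(iters):
        warped = np.interp(x + t, x, ref)   # ref resampled at the shifted positions
        grad = np.gradient(warped)          # image gradient = Jacobian d(warped)/dt
        r = warped - cur                    # photometric residual
        t -= grad.dot(r) / grad.dot(grad)   # Gauss-Newton step on 0.5*||r||^2
    return t

x = np.arange(32.0)
ref = np.exp(-((x - 12) ** 2) / 10)         # a smooth synthetic "image"
cur = np.interp(x + 1.5, x, ref)            # the same image shifted by 1.5 px
# photometric_shift(ref, cur) recovers a shift of ~1.5
```

In the real method the scalar shift becomes a 6-DoF camera pose, the warp uses the depth image to reproject pixels, and the same gradient-based minimization of the photometric residual applies.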
=== Dense Mapping of large RGB-D Sequences ===
In our publication at ICCV 2013 we describe a method for the volumetric fusion of large RGB-D sequences. The video below shows the mesh visualization of our office floor, a scene computed from more than 24,000 RGB-D images captured with the Asus Xtion sensor. The reconstruction ran at more than 200 Hz on a GTX 680. The finest resolution was 5 mm, and the entire scene, including color, fit into approximately 2.5 GB of GPU RAM.
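Volumetric fusion of depth images is commonly formulated as a truncated signed distance function (TSDF) averaged per voxel. The sketch below fuses depth measurements along a single viewing ray; the truncation band and the running-average update rule are generic TSDF assumptions for illustration, not the paper's actual implementation (only the 5 mm voxel size is taken from the text above).

```python
import numpy as np

VOXEL = 0.005   # voxel edge length: 5 mm, matching the resolution quoted above
TRUNC = 0.02    # truncation band of the signed distance, in metres (assumed)

def integrate(tsdf, weight, depths):
    """Fuse depth measurements (metres along one viewing ray) into
    per-voxel truncated signed distances by a weighted running average."""
    z = (np.arange(len(tsdf)) + 0.5) * VOXEL     # voxel centres on the ray
    for d in depths:
        sdf = np.clip(d - z, -TRUNC, TRUNC)      # truncated signed distance
        m = (d - z) > -TRUNC                     # skip voxels far behind the surface
        tsdf[m] = (tsdf[m] * weight[m] + sdf[m]) / (weight[m] + 1)
        weight[m] += 1
    return tsdf, weight

# Four noisy depth readings of a surface at 0.2 m along one ray:
tsdf, weight = integrate(np.zeros(100), np.zeros(100),
                         [0.203, 0.197, 0.200, 0.200])
# The fused surface lies at the zero crossing of the TSDF, here at ~0.200 m.
```

The mesh shown in the video is then extracted at the zero crossing of the fused distances (in 3-D, e.g. by marching cubes); averaging many noisy depth samples is what makes the fused surface smoother than any single frame.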
While the method published at ICCV 2013 required a GPU to run in real time, in our paper published at ICRA 2014 we demonstrated that the mapping part of dense volumetric RGB-D image fusion also runs on a single standard CPU core at camera speed. Furthermore,
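As a rough plausibility check (the stream format and clock rate are illustrative assumptions, not figures from the paper): at the rate of a typical depth camera, a single core still has a few hundred clock cycles available per depth sample, which is enough for a lightweight per-voxel update.

```python
# Back-of-envelope cycle budget for camera-speed fusion on one CPU core.
# Resolution, frame rate and clock rate are assumptions for illustration.
width, height, fps = 640, 480, 30     # typical Asus Xtion depth stream
clock_hz = 3.0e9                      # assumed single-core clock rate
pixels_per_s = width * height * fps   # depth samples to integrate per second
cycles_per_pixel = clock_hz / pixels_per_s
print(round(cycles_per_pixel))        # -> 326 cycles per depth sample
```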