Computer Vision Group
TUM Department of Informatics
Technical University of Munich
Informatik IX
Chair of Computer Vision & Artificial Intelligence

Boltzmannstrasse 3
85748 Garching info@vision.in.tum.de

Follow us on:
CVG Group DVL Group

News

10.12.2020

Frank Dellaert (Georgia Tech) will give a talk in the TUM AI lecture series on Dec 17th, 4pm! Livestream

15.10.2020

Jon Barron (Google) will give a talk in the TUM AI lecture series on Oct 22nd, 9pm! Livestream

02.10.2020

We have five papers accepted to 3DV 2020!

30.09.2020

Our efficient deep network architectures form the AI engine of the project Slow Down COVID-19 at Harvard.

24.07.2020

Our practical course "Vision-based Navigation" (WS18, SS19) by Dr. Vladyslav Usenko and Nikolaus Demmel was honored as the best practical course of the academic year 2018/2019 by the Department of Informatics.

More


Useful tools for the RGB-D benchmark

We provide a set of tools that can be used to pre-process the datasets and to evaluate the SLAM/tracking results. The scripts can be downloaded here.

To check out the repository using Subversion, run

svn checkout https://svncvpr.in.tum.de/cvpr-ros-pkg/trunk/rgbd_benchmark/rgbd_benchmark_tools

Associating color and depth images

The Kinect provides color and depth images asynchronously, so the time stamps of the color images generally do not coincide with those of the depth images. Therefore, we need a way of associating color images with depth images.

For this purpose, you can use the ''associate.py'' script. It reads the time stamps from the rgb.txt file and the depth.txt file, and joins them by finding the best matches.
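The underlying matching strategy can be sketched in a few lines of Python: gather all cross pairs of time stamps whose difference lies below a threshold, then greedily keep the closest pairs so that every stamp is used at most once. This is an illustrative sketch only, not the actual script; ''associate.py'' also parses the files and exposes its thresholds as command-line options.

```python
def associate(stamps_a, stamps_b, max_difference=0.02):
    """Pair time stamps from two lists, greedily taking the closest pairs.

    A sketch of the matching idea; thresholds and file handling in the
    real script may differ.
    """
    candidates = sorted(
        (abs(a - b), a, b)
        for a in stamps_a for b in stamps_b
        if abs(a - b) < max_difference
    )
    matches, used_a, used_b = [], set(), set()
    for _, a, b in candidates:
        if a not in used_a and b not in used_b:
            used_a.add(a)
            used_b.add(b)
            matches.append((a, b))
    return sorted(matches)

rgb_stamps = [1.00, 1.05, 1.10]      # e.g. stamps read from rgb.txt
depth_stamps = [1.01, 1.06, 1.11]    # e.g. stamps read from depth.txt
print(associate(rgb_stamps, depth_stamps))
```

Because the pairs are visited in order of increasing time difference, each stamp ends up matched to its closest still-unused partner.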


Evaluation

After estimating the camera trajectory of the Kinect and saving it to a file, we need to evaluate the error of the estimated trajectory by comparing it against the ground truth. There are different error metrics; two prominent ones are the absolute trajectory error (ATE) and the relative pose error (RPE). The ATE is well-suited for measuring the performance of visual SLAM systems, whereas the RPE is well-suited for measuring the drift of a visual odometry system, for example the drift per second.

For both metrics, we provide automated evaluation scripts that can be downloaded here. Note that there is also an online version available on our website. Both trajectories have to be stored in a text file (format: 'timestamp tx ty tz qx qy qz qw', more information). For comparison, we offer a set of trajectories from RGBD-SLAM.
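A trajectory file in this format can be read with a few lines of Python. This is a minimal sketch; the evaluation scripts ship their own, more robust readers.

```python
def read_trajectory(lines):
    """Parse lines in the format 'timestamp tx ty tz qx qy qz qw'.

    Returns a dict mapping timestamp -> [tx, ty, tz, qx, qy, qz, qw].
    """
    trajectory = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip comments and blank lines
        values = [float(v) for v in line.split()]
        trajectory[values[0]] = values[1:]
    return trajectory

example = [
    "# timestamp tx ty tz qx qy qz qw",
    "1305031102.175 1.0 2.0 0.5 0.0 0.0 0.0 1.0",  # illustrative line
]
print(read_trajectory(example))
```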

Absolute Trajectory Error (ATE)

The absolute trajectory error directly measures the difference between points of the true and the estimated trajectory. As a pre-processing step, we associate the estimated poses with ground truth poses using the timestamps. Based on this association, we align the true and the estimated trajectory using singular value decomposition. Finally, we compute the difference between each pair of poses, and output the mean/median/standard deviation of these differences. Optionally, the script can plot both trajectories to a png or pdf file.
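The core of this computation can be sketched as follows, assuming the standard SVD-based rigid alignment (Horn's method) on already-associated 3D positions; the actual ''evaluate_ate.py'' additionally performs the timestamp association, statistics and plotting described above.

```python
import numpy as np

def align(model, data):
    """Rigid alignment R, t minimizing ||R @ model + t - data|| (Horn's method).

    model, data: 3xN arrays of associated trajectory positions.
    """
    mu_m = model.mean(axis=1, keepdims=True)
    mu_d = data.mean(axis=1, keepdims=True)
    W = (data - mu_d) @ (model - mu_m).T                     # cross-covariance
    U, _, Vt = np.linalg.svd(W)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = U @ S @ Vt
    t = mu_d - R @ mu_m
    return R, t

def ate_rmse(model, data):
    """Root-mean-square ATE after aligning model to data."""
    R, t = align(model, data)
    residuals = R @ model + t - data
    return float(np.sqrt((residuals ** 2).sum(axis=0).mean()))

# Toy check: an estimate that differs from ground truth only by a rigid
# transform should have (near-)zero ATE after alignment.
gt = np.array([[0.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0],
               [0.0, 0.0, 0.0, 1.0]])        # 3xN ground-truth positions
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])            # 90 degree rotation about z
est = Rz @ gt + np.array([[0.3], [-0.2], [0.1]])
print(ate_rmse(est, gt))                     # ~0: the rigid offset is removed
```

The reflection guard is needed because a plain SVD solution can return an improper rotation when the point configuration is degenerate.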


Relative Pose Error (RPE)

For computing the relative pose error, we provide the script ''evaluate_rpe.py''. This script computes the error in the relative motion between pairs of timestamps. By default, it evaluates all pairs of timestamps in the estimated trajectory file. As the number of such pairs grows quadratically with the length of the trajectory, it can make sense to downsample this set to a fixed number (--max_pairs). Alternatively, one can choose a fixed window size (--fixed_delta). In this case, each pose in the estimated trajectory is associated with a later pose according to the window size (--delta) and unit (--delta_unit). This evaluation technique is useful for estimating the drift.
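The per-pair error for one fixed delta can be sketched as follows. This is a minimal illustration with 4x4 pose matrices and a hypothetical ''make_pose'' helper; ''evaluate_rpe.py'' itself also handles pair sampling, the different delta units, and the rotational error component.

```python
import numpy as np

def make_pose(x):
    """Hypothetical helper: a pure translation along x as a 4x4 pose matrix."""
    T = np.eye(4)
    T[0, 3] = x
    return T

def rpe_translational(gt, est, delta=1):
    """Translational RPE per pair for a fixed frame delta.

    gt, est: lists of 4x4 pose matrices at associated time indices.
    """
    errors = []
    for i in range(len(gt) - delta):
        gt_rel = np.linalg.inv(gt[i]) @ gt[i + delta]       # true relative motion
        est_rel = np.linalg.inv(est[i]) @ est[i + delta]    # estimated relative motion
        error = np.linalg.inv(gt_rel) @ est_rel             # residual motion
        errors.append(float(np.linalg.norm(error[:3, 3])))  # translational part
    return errors

gt = [make_pose(0.0), make_pose(1.0), make_pose(2.0)]
est = [make_pose(0.0), make_pose(1.1), make_pose(2.2)]  # constant drift per step
print(rpe_translational(gt, est))  # two pairs, each with ~0.1 m error
```

Note that, unlike the ATE, no global alignment is performed: only the relative motions over the chosen window are compared, which is what makes the metric sensitive to drift rather than to the absolute pose.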


Generating a point cloud from images

The depth images are already registered to the color images, so the pixels in the depth image correspond one-to-one to the pixels in the color image. Generating colored point clouds is therefore straightforward. An example script is available in ''generate_pointcloud.py''; it takes a color image and a depth map as input and generates a point cloud file in the PLY format. This format can be read by many 3D modeling programs, for example MeshLab, which is available for Windows, Mac and Linux.
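The back-projection step at the heart of this conversion can be sketched with the pinhole camera model. The intrinsics below (fx, fy, cx, cy) are illustrative placeholders, not the calibrated values of the benchmark cameras; attaching the per-pixel color and writing the PLY header is what ''generate_pointcloud.py'' adds on top of this projection.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a registered depth map (meters, 0 = invalid) to Nx3 points."""
    v, u = np.nonzero(depth)        # pixel coordinates with valid depth
    z = depth[v, u]
    x = (u - cx) * z / fx           # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Toy example: a 4x4 depth map with a single valid pixel two meters away.
depth = np.zeros((4, 4))
depth[2, 3] = 2.0
pts = depth_to_points(depth, fx=525.0, fy=525.0, cx=1.5, cy=1.5)
print(pts)
```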


Adding point clouds to ROS bag files

On the download page, we already provide ROS bag files with added point clouds for visual inspection of the datasets in RVIZ. Because of the large size of the resulting files, we downsampled these bag files to 2 Hz. If you want to generate ROS bag files that contain the point clouds for all images (at 30 Hz), you can use the ''add_pointclouds_to_bagfile.py'' script.


Visualizing the datasets in RVIZ

RVIZ is the standard visualization tool in ROS. It can be easily adapted to display many different messages. In particular, it can be used to display the point clouds from a ROS bag file. For this, run (in three different consoles)

roscore
rosrun rviz rviz
rosbag play rgbd_dataset_freiburg1_xyz-2hz-with-pointclouds.bag

On the first launch, you will have to enable the built-in displays (Menu -> Plugins -> check "Loaded" for the built-in plugins). In the displays tab, set the "fixed frame" to "/world". Click "Add", select the PointCloud2 display, and set its topic to "/camera/rgb/points". To show the colors, change the "color transformer" to "RGB8" and the "style" to "points" in the point cloud display. If you want, you can set the decay time to a suitable value, for example 5 seconds, to accumulate the points in the viewer as they come in.
