Computer Vision Group
TUM Department of Informatics
Technical University of Munich

Rolling-Shutter Visual-Inertial Odometry Dataset

Contact: David Schubert, Nikolaus Demmel, Lukas von Stumberg, Vladyslav Usenko.

We present a novel dataset that contains time-synchronized global-shutter and rolling-shutter images, IMU data and ground-truth poses for ten different sequences.


Conference and Workshop Papers
Rolling-Shutter Modelling for Visual-Inertial Odometry (D. Schubert, N. Demmel, L. von Stumberg, V. Usenko and D. Cremers), in International Conference on Intelligent Robots and Systems (IROS), 2019.



The figure shows approximate sensor orientations in xyz-rgb convention.

Note: this is an updated figure compared to the schematic illustration in the paper, which might have been confusing. Also, in the calibrated dataset, the offset between the IMU and the marker reference frame has already been accounted for: the ground-truth poses are post-processed to track the IMU frame.

For the calibrated sequences listed in the table, the ground-truth poses are given in the IMU coordinate frame and are time-synchronized with the image and IMU data. The geometric camera-IMU calibration can be found here: calibration.yaml.
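
To illustrate how such camera-IMU extrinsics are typically used (a minimal sketch: the matrix names and the 5 cm offset below are placeholders for illustration, not values taken from calibration.yaml), a camera pose can be obtained from a ground-truth IMU pose by composing homogeneous transforms:

```python
import numpy as np

# Hypothetical example: composing a ground-truth IMU pose with camera-IMU
# extrinsics to obtain the camera pose. The matrices are placeholders,
# not values read from calibration.yaml.
T_world_imu = np.eye(4)                  # ground-truth pose of the IMU in the world frame
T_imu_cam = np.eye(4)                    # camera pose in the IMU frame (extrinsics)
T_imu_cam[:3, 3] = [0.05, 0.0, 0.0]      # assumed 5 cm translation, for illustration only

# Pose of the camera in the world frame: chain the transforms.
T_world_cam = T_world_imu @ T_imu_cam
```

Since the ground-truth poses already track the IMU frame, a single such composition per pose is all that is needed to express the trajectory in a camera frame.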

Calibration was done using the following sequences.

Camera calibration dataset-calib-cam1.bag dataset-calib-cam1.tar
IMU calibration dataset-calib-imu1.bag dataset-calib-imu1.tar

Note that for the calibration sequences, both cameras were operating in global-shutter mode. For the rolling-shutter images in the other sequences, the timestamps refer to the first row; in general, timestamps denote the middle of the exposure interval.

For more information about calibration, we refer to our visual-inertial dataset.

According to the camera manufacturer, the time difference between two consecutive rows due to the rolling shutter cannot be read out directly, but it is very well approximated by the step size of the exposure time. In this way, we obtain an approximate row time difference of 29.4737 microseconds.
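
Given this row time difference and the first-row timestamp convention for the rolling-shutter images, the capture time of any pixel row follows directly. A minimal sketch (the helper function is hypothetical, not part of the dataset tooling):

```python
# Approximate row time difference due to rolling shutter, in microseconds.
ROW_DT_US = 29.4737

def row_capture_time_us(image_ts_us: float, row: int) -> float:
    """Capture time of pixel row `row`, assuming the image timestamp
    refers to the first row of the rolling-shutter image."""
    return image_ts_us + row * ROW_DT_US
```

For a sensor with 1024 rows, for example, the last row would be read out roughly 30 ms after the first, which is the delay a rolling-shutter-aware odometry model has to account for.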

