Computer Vision Group
TUM School of Computation, Information and Technology
Technical University of Munich

Informatik IX
Computer Vision Group

Boltzmannstrasse 3
85748 Garching
info@vision.in.tum.de

Marker-less Motion Capture

In this project, we develop statistical and energy minimization methods for tracking articulated 3D objects from multiple camera views. Such techniques are of central importance for markerless motion capture in particular. The human motion sequences extracted from multiple videos can subsequently be used to animate virtual characters, as is commonly done in action movies.

Figure: four input sequences; tracked 3D model superimposed.
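
As a rough illustration of the energy-minimization idea, the sketch below fits the pose parameters of a hypothetical two-joint kinematic chain so that its projected joints match 2D observations in two synthetic calibrated views. The limb lengths, camera setup and plain joint-reprojection energy are assumptions chosen for brevity; the publications listed below instead use contour-, region- and optic-flow-based energies on real multi-view footage.

# Minimal, self-contained sketch (not the group's actual pipeline): articulated
# pose estimation by energy minimization over multiple calibrated views.
import numpy as np
from scipy.optimize import least_squares

BONE_LENGTHS = np.array([0.30, 0.25])  # assumed limb lengths in metres


def forward_kinematics(pose):
    """3D joint positions of a planar 2-joint chain for pose = (angle1, angle2)."""
    a1, a2 = pose
    p0 = np.zeros(3)
    p1 = p0 + BONE_LENGTHS[0] * np.array([np.cos(a1), np.sin(a1), 0.0])
    p2 = p1 + BONE_LENGTHS[1] * np.array([np.cos(a1 + a2), np.sin(a1 + a2), 0.0])
    return np.stack([p0, p1, p2])


def camera_matrix(K, R, C):
    """3x4 projection matrix for intrinsics K, rotation R and camera centre C."""
    t = -R @ C
    return K @ np.hstack([R, t[:, None]])


def project(points, P):
    """Pinhole projection of Nx3 world points with a 3x4 camera matrix P."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    uvw = homog @ P.T
    return uvw[:, :2] / uvw[:, 2:3]


def energy(pose, cameras, observations):
    """Stacked 2D reprojection residuals over all views (the term to minimize)."""
    joints = forward_kinematics(pose)
    res = [project(joints, P) - obs for P, obs in zip(cameras, observations)]
    return np.concatenate(res).ravel()


if __name__ == "__main__":
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    # Synthetic setup: a frontal view and a side view, both 2 m from the subject.
    cameras = [
        camera_matrix(K, np.eye(3), np.array([0.0, 0.0, -2.0])),
        camera_matrix(K, np.array([[0.0, 0.0, 1.0],
                                   [0.0, 1.0, 0.0],
                                   [-1.0, 0.0, 0.0]]), np.array([2.0, 0.0, 0.0])),
    ]
    true_pose = np.array([0.4, 0.6])
    observations = [project(forward_kinematics(true_pose), P) for P in cameras]

    # Minimize the multi-view energy starting from a neutral pose.
    fit = least_squares(energy, x0=np.zeros(2), args=(cameras, observations))
    print("estimated pose:", fit.x)  # should be close to [0.4, 0.6]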

Related publications



Book Chapters
2007
Contours, optic flow, and prior knowledge: cues for capturing 3D human motion in videos (T. Brox, B. Rosenhahn and D. Cremers), Chapter in Human Motion - Understanding, Modeling, Capture, and Animation, Springer, 2007. [bibtex] [pdf]
Journal Articles
2015
Fast Visual Odometry for 3-D Range Sensors (M. Jaimez and J. Gonzalez-Jimenez), In IEEE Transactions on Robotics, volume 31, 2015. ([video]) [bibtex] [pdf]
2009
Combined region- and motion-based 3D tracking of rigid and articulated objects (T. Brox, B. Rosenhahn, J. Gall and D. Cremers), In IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 32, 2009. [bibtex] [pdf]
2007
Three-dimensional shape knowledge for joint image segmentation and pose tracking (B. Rosenhahn, T. Brox and J. Weickert), In International Journal of Computer Vision, volume 73, 2007. (available online) [bibtex]
Preprints
2021
Event-Based Feature Tracking in Continuous Time with Sliding Window Optimization (J. Chui, S. Klenk and D. Cremers), In arXiv preprint, 2021. [bibtex] [arXiv:2107.04536] [pdf]
Conference and Workshop Papers
2022
DirectTracker: 3D Multi-Object Tracking Using Direct Image Alignment and Photometric Bundle Adjustment (M. Gladkova, N. Korobov, N. Demmel, A. Ošep, L. Leal-Taixé and D. Cremers), In International Conference on Intelligent Robots and Systems (IROS), 2022. ([project page]) [bibtex] [arXiv:2209.14965]
2017
An Efficient Background Term for 3D Reconstruction and Tracking with Smooth Subdivision Surface Models (M. Jaimez, T. J. Cashman, A. Fitzgibbon, J. Gonzalez-Jimenez and D. Cremers), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. ([video]) [bibtex] [pdf]
2015
Model-Based Tracking at 300Hz using Raw Time-of-Flight Observations (J. Stühmer, S. Nowozin, A. Fitzgibbon, R. Szeliski, T. Perry, S. Acharya, D. Cremers and J. Shotton), In IEEE International Conference on Computer Vision (ICCV), 2015. ([video]) [bibtex] [pdf]
2008
Modeling and Tracking Line-Constrained Mechanical Systems (B. Rosenhahn, T. Brox, D. Cremers and H.-P. Seidel), In 2nd Workshop on Robot Vision (G. Sommer, R. Klette, eds.), volume 4931, 2008. [bibtex] [pdf]
Markerless Motion Capture of Man-Machine Interaction (B. Rosenhahn, C. Schmaltz, T. Brox, J. Weickert, D. Cremers and H.-P. Seidel), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008. [bibtex] [pdf]
2007
Scaled motion dynamics for markerless motion capture (B. Rosenhahn, T. Brox and H.-P. Seidel), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2007. [bibtex] [pdf]
Online smoothing for markerless motion capture (B. Rosenhahn, T. Brox, D. Cremers and H.-P. Seidel), In Pattern Recognition (Proc. DAGM), Springer, 2007. [bibtex] [pdf]
Nonparametric density estimation with adaptive anisotropic kernels for human motion tracking (T. Brox, B. Rosenhahn, D. Cremers and H.-P. Seidel), In Proc. 2nd International Workshop on Human Motion (A. Elgammal, B. Rosenhahn, R. Klette, eds.), Springer, volume 4814, 2007. [bibtex] [pdf]
Region-based Pose Tracking (C. Schmaltz, B. Rosenhahn, T. Brox, D. Cremers, J. Weickert, L. Wietzke and G. Sommer), In Proc. 3rd Iberian Conference on Pattern Recognition and Image Analysis, Springer, 2007. [bibtex] [pdf]
Occlusion Modeling by Tracking Multiple Objects (C. Schmaltz, B. Rosenhahn, T. Brox, D. Cremers, J. Weickert, L. Wietzke and G. Sommer), In Pattern Recognition (Proc. DAGM), Springer, 2007. [bibtex] [pdf]
2006
High accuracy optical flow serves 3-D pose tracking: exploiting contour and flow based constraints (T. Brox, B. Rosenhahn, D. Cremers and H.-P. Seidel), In European Conference on Computer Vision (ECCV) (A. Leonardis, H. Bischof, A. Pinz, eds.), Springer, volume 3952, 2006. [bibtex] [pdf]
Nonparametric density estimation for human pose tracking (T. Brox, B. Rosenhahn, U. Kersting and D. Cremers), In Pattern Recognition (Proc. DAGM) (K. Franke et al., eds.), Springer, volume 4174, 2006. [bibtex] [pdf]
