Computer Vision Group
TUM Department of Informatics
Technical University of Munich




Practical Course: Vision-based Navigation (6h SWS / 10 ECTS)

WS 2016/17, TU München


Vladyslav Usenko, Georg Kuschk, Prof. Dr. Daniel Cremers

Please direct questions to visnav16@vision.in.tum.de

Date & Location

Lecture & exercises (assignment phase): Tuesdays, lectures approx. 2pm to 4pm in seminar room 02.09.023, tutoring of exercises approx. 4pm to 6pm in lab 02.05.014
Tutored lab time (project phase): Tuesdays from 2pm to 6pm in lab 02.05.014 (other times for free project work available, tbd)

The course will start on Tuesday, 25.10.2016, at 14:00 in room 02.09.023.

Course Structure

The course will take place in our seminar room 02.09.023 and in our lab. In the first phase (4-5 weeks), there will be introductory lectures in the seminar room, and programming assignment sheets on basic problems will be handed out every week. In the second phase, the students will work in teams of 2-3 on a practical project. For the rest of the semester, each group meets weekly with their tutors to present and discuss their progress. At the end of the course, the teams will present their project in a talk, demonstrate their solution, and document their work in a written report. Both the assignments and the project will be graded, and the final grade will be derived from both parts.

For more details see Course Layout below.

Course Registration


  • Good knowledge of C/C++ and of basic mathematics (linear algebra, analysis, and numerics) is required
  • Prior practical experience with CUDA programming, robotics, or computer vision topics is a plus
  • Participation in at least one of the following lectures of the TUM Computer Vision Group: Variational Methods for Computer Vision, Multiple View Geometry, Autonomous Navigation for Flying Robots. Comparable lectures can also be accepted; please contact us.

Number of participants: max. 12

Course Description

Vision-based localization, mapping, and navigation have recently seen tremendous progress in computer vision and robotics research. Such methods already have a strong impact on applications in fields such as robotics and augmented reality.

In this course, students will develop and implement algorithms for visual navigation. For example, vision-based autonomous navigation for platforms such as wheeled robots and quadrocopters, or vision-based localization and mapping with handheld devices will be tackled. This includes, e.g., simultaneous localization and mapping with monocular, stereo, or RGB-D cameras, (semi-)dense 3D reconstruction, obstacle perception and avoidance, or autonomous path planning and execution.
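To make the underlying idea concrete: most of the methods listed above (visual SLAM, visual odometry, 3D reconstruction) estimate camera poses and 3D structure by minimizing the reprojection error of landmarks under a pinhole camera model. The following is a minimal illustrative sketch, not course material; all function names and numeric values are made up for the example.

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D world point X into pixel coordinates.

    K: 3x3 camera intrinsics, R: 3x3 rotation, t: translation
    (world-to-camera transform), X: 3D point in world coordinates.
    """
    Xc = R @ X + t          # transform the point into the camera frame
    x = K @ Xc              # apply the pinhole intrinsics
    return x[:2] / x[2]     # perspective divide -> pixel coordinates

def reprojection_error(K, R, t, X, u):
    """Squared pixel distance between the projected point and observation u.

    This is the quantity that bundle adjustment / SLAM back-ends
    minimize over poses (R, t) and landmarks X.
    """
    return float(np.sum((project(K, R, t, X) - u) ** 2))

# Illustrative intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                     # camera aligned with the world frame
t = np.zeros(3)
X = np.array([0.2, -0.1, 2.0])    # a point 2 m in front of the camera

u = project(K, R, t, X)
print(u)                                   # projected pixel coordinates
print(reprojection_error(K, R, t, X, u))   # 0.0 for a perfect observation
```

In a real system this error is summed over many landmarks and camera frames and minimized with a nonlinear least-squares solver; the exercises and projects build on exactly this kind of geometry.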

Course Layout

  • Lecture & Exercise (tba): 2 hours per week of lectures, Tuesdays from 2pm to 4pm, and 2 hours per week of tutored exercises, Tuesdays from 4pm to 6pm. There are 4-5 lecture & exercise sessions. Each week, the exercise for the following week will be announced, and the exercise of the current week will be presented to the tutors. The exercises must be done in groups of 2-3 students; the groups should be formed on the first lecture day. Students can use our lab computers in room 02.05.014. Attendance is mandatory.
  • Project (tba): Each group will be assigned to a project. Students can work in the lab and consult the tutors on Tuesdays from 2pm to 6pm. Additional lab time for working freely can be arranged.
  • Presentation and demo (tba): Each group will be assigned a time slot on one of the last days of the semester, to present their results and give a live demo, followed by a Q&A session.
  • Project Report: Each group writes a report on their project work (10-12 pages, single column, single-spaced lines, 11pt font size).


Lecture notes:

Selected publications:

  • LSD-SLAM: Large-Scale Direct Monocular SLAM (J. Engel, T. Schöps, D. Cremers), In European Conference on Computer Vision (ECCV), 2014.
  • Semi-Dense Visual Odometry for AR on a Smartphone (T. Schöps, J. Engel, D. Cremers), In International Symposium on Mixed and Augmented Reality, 2014.
  • Visual-Inertial Navigation for a Camera-Equipped 25g Nano-Quadrotor (O. Dunkley, J. Engel, J. Sturm, D. Cremers), In IROS2014 Aerial Open Source Robotics Workshop, 2014.
  • Collision Avoidance for Quadrotors with a Monocular Camera (H. Alvarez, L.M. Paz, J. Sturm, D. Cremers), In Proc. of The 12th International Symposium on Experimental Robotics (ISER), 2014.


Additional material can be downloaded from here.


Informatik IX
Chair of Computer Vision & Artificial Intelligence

Boltzmannstrasse 3
85748 Garching
info@vision.in.tum.de



