Practical Course: Vision-based Navigation IN2106 (6h SWS / 10 ECTS)
SS 2018, TU München
Lecturers: Dr. Xiang Gao, Vladyslav Usenko
Tutors: Dr. Xiang Gao, Vladyslav Usenko
Please direct questions to email@example.com
First Lecture: April 16th, 14:00, in seminar room 02.09.023
IMPORTANT: Registration through the TUM Matching system must be completed between February 9th and 14th, 2018. Details on the schedule can be found here.
TUMOnline course entry: https://campus.tum.de/tumonline/wbLv.wbShowLVDetail?pStpSpNr=950346021&pSpracheNr=1
Date & Location
Lecture & exercises (assignment phase) : Mondays, lectures approx. 2pm to 4pm in seminar room 02.09.023, tutoring of exercises approx. 4pm to 6pm in lab 02.05.014
Tutored lab time (project phase) : Mondays from 2pm to 6pm in lab 02.05.014 (other times for free project work available, tbd)
The course will take place in our seminar room 02.09.023 and in lab room 02.05.014. In the first phase (4-5 weeks), there will be introductory lectures in seminar room 02.09.023, and programming assignment sheets on basic problems will be handed out every week. In the second phase, the students will work in teams of 2-3 on a practical problem (project). For the rest of the semester, each group meets weekly with its tutors to present and discuss its progress. At the end of the course, the teams will present their projects in a talk and demonstrate their solutions. They will also document their project work in a written report. Both the assignments and the project part will be graded, and the final grade will be computed from both.
For more details see Course Layout below.
Places are assigned through the TUM Matching system.
Requirements
- Good knowledge of C/C++ and of basic mathematics such as linear algebra, analysis, and numerics is required.
- Participation in at least one of the following lectures of the TUM Computer Vision Group: Variational Methods for Computer Vision, Multiple View Geometry, or Autonomous Navigation for Flying Robots. Similar lectures may also be accepted; please contact us.
Number of participants: max. 12
Course Description
Vision-based localization, mapping, and navigation have recently seen tremendous progress in computer vision and robotics research. Such methods already have a strong impact on applications in fields such as robotics and augmented reality.
In this course, students will develop and implement algorithms for visual navigation. For example, vision-based autonomous navigation for platforms such as wheeled robots and quadrocopters, or vision-based localization and mapping with handheld devices will be tackled. This includes, e.g., simultaneous localization and mapping with monocular, stereo, or RGB-D cameras, (semi-)dense 3D reconstruction, obstacle perception and avoidance, or autonomous path planning and execution.
Course Layout
- Lecture & Exercise : 2 hours per week of lectures, Mondays from 2pm to 4pm, and 2 hours per week of tutored exercises, Mondays from 4pm to 6pm. There are 4-5 lecture & exercise sessions. Each week, the exercise for the following week will be announced, and the exercise of the current week will be presented to the tutors. The exercises must be done in groups of 2-3 students; the groups should be formed on the first lecture day. Students can use our lab computers in room 02.05.014. Attendance is mandatory.
- Project : Each group will be assigned to a project. Students can work in the lab and consult the tutors on Mondays from 2pm to 6pm. Attendance to meetings with tutors is mandatory. Additional lab time for working freely can be arranged.
- Presentation and demo : Each group will be assigned a time slot on one of the last days of the semester to present their results and give a live demo, followed by a Q&A session. The presentation shall be 20 minutes long, followed by 10 minutes of questions.
- Project Report : Each group writes a report on their project work (10-12 pages, single column, single-spaced, 11pt font size; the title page, table of contents, and references do not count toward the page limit). A standard template will be provided.
Literature
- Computer Vision II: Multiple View Geometry, http://vision.in.tum.de/teaching/ss2016/mvg2016
- Autonomous Navigation for Flying Robots (EdX course), http://vision.in.tum.de/teaching/ss2015/autonavx
- Computer Vision I: Variational Methods, http://vision.in.tum.de/teaching/ws2016/vmcv2016
- Direct Sparse Odometry (J. Engel, V. Koltun, D. Cremers), In Trans. on Pattern Analysis and Machine Intelligence, 2017.
- Semi-Dense Visual Odometry for AR on a Smartphone (T. Schöps, J. Engel, D. Cremers), In International Symposium on Mixed and Augmented Reality, 2014.
- From Monocular SLAM to Autonomous Drone Exploration, In European Conference on Mobile Robots (ECMR), 2017.
- Collision Avoidance for Quadrotors with a Monocular Camera (H. Alvarez, L.M. Paz, J. Sturm, D. Cremers), In Proc. of The 12th International Symposium on Experimental Robotics (ISER), 2014.
Additional material can be downloaded from here.