Computer Vision Group
Faculty of Informatics
Technical University of Munich

Deep Learning

Contact: Dr. Laura Leal-Taixe, Caner Hazırbaş, Philip Häusser, Vladimir Golkov, Lingni Ma

Deep Learning is a powerful machine learning tool that has shown outstanding performance in many fields. One of its greatest successes has been large-scale object recognition with Convolutional Neural Networks (CNNs). The main power of CNNs comes from learning representations directly from the raw input in a hierarchical, layer-based structure.

We apply Convolutional Neural Networks to computer vision tasks such as optical flow estimation and scene understanding, and develop state-of-the-art methods for them.

Learning by Association

A child is able to learn new concepts quickly, without the need for millions of examples that are pointed out individually. Once a child has seen one dog, she or he will be able to recognize other dogs, and becomes better at recognition with exposure to more variety. When it comes to training computers to perform similar tasks, deep neural networks have demonstrated superior performance among machine learning models.

However, these networks are trained very differently from a learning child: they require a label for every training example, following a purely supervised training scheme. Neural networks are defined by a huge number of parameters to be optimized, so a plethora of labeled training data is required, which can be costly and time-consuming to obtain. It is therefore desirable to train machine learning models without labels (unsupervised) or with only a fraction of the data labeled (semi-supervised).

We propose a novel training method that follows an intuitive approach: learning by association. We feed a batch of labeled and a batch of unlabeled data through a network, producing embeddings for both batches. An imaginary walker is then sent from samples in the labeled batch to samples in the unlabeled batch. Each transition follows a probability distribution obtained from the similarity of the respective embeddings, which we refer to as an association.
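The walker mechanism above can be sketched in a few lines of NumPy. This is a simplified illustration, not our exact implementation: transition probabilities are taken as softmaxed dot-product similarities, and the loss form (cross-entropy of the round-trip distribution against a uniform same-class target) is an assumed, minimal variant of the idea.

```python
import numpy as np

def association_loss(emb_labeled, emb_unlabeled, labels):
    """Sketch of a learning-by-association objective (assumed, simplified form).

    emb_labeled:   (n_l, d) embeddings of the labeled batch
    emb_unlabeled: (n_u, d) embeddings of the unlabeled batch
    labels:        (n_l,)   class labels of the labeled batch
    """
    def softmax(x):
        e = np.exp(x - x.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    # Similarity of every labeled/unlabeled embedding pair.
    sim = emb_labeled @ emb_unlabeled.T          # (n_l, n_u)

    # Walker transition probabilities: labeled -> unlabeled and back.
    p_ab = softmax(sim)                          # (n_l, n_u)
    p_ba = softmax(sim.T)                        # (n_u, n_l)
    p_aba = p_ab @ p_ba                          # round trip, (n_l, n_l)

    # A correct round trip ends at any sample of the same class,
    # uniformly distributed among them.
    same_class = (labels[:, None] == labels[None, :]).astype(float)
    target = same_class / same_class.sum(axis=1, keepdims=True)

    # Cross-entropy between round-trip distribution and target.
    return -(target * np.log(p_aba + 1e-8)).sum(axis=1).mean()
```

Minimizing this loss encourages embeddings of the same class, labeled or not, to become similar, since wrong-class round trips are penalized.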

In this line of work, we have published papers on semi-supervised training, domain adaptation, multimodal training (text and images), and unsupervised training / clustering.

Deep Depth From Focus

DDFF aims at predicting a depth map from a given focal stack in which the focus of the camera gradually changes. DDFFNet is an end-to-end trained Convolutional Neural Network, designed to solve the highly ill-posed depth from focus task. Please visit the DDFF Project Page for details.
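To make the task concrete, here is a classical sharpness-based baseline for depth from focus, not DDFFNet itself: for each pixel, pick the focal slice in which it appears sharpest, using the magnitude of a discrete Laplacian as the focus measure. DDFFNet replaces this hand-crafted pipeline with an end-to-end-trained network.

```python
import numpy as np

def depth_from_focus_baseline(focal_stack):
    """Classical baseline for the depth-from-focus task (not DDFFNet).

    focal_stack: (S, H, W) array of S images with gradually changing focus.
    Returns an (H, W) index map: the focal slice in which each pixel is
    sharpest, a coarse proxy for depth.
    """
    sharpness = np.empty_like(focal_stack)
    for s, img in enumerate(focal_stack):
        # Discrete Laplacian magnitude as a per-pixel focus measure
        # (periodic boundary handling via np.roll, for simplicity).
        lap = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
               + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)
               - 4 * img)
        sharpness[s] = np.abs(lap)
    # The slice of maximal sharpness indexes the in-focus plane.
    return sharpness.argmax(axis=0)
```

Such per-pixel focus measures are noisy and break down in textureless regions, which is one reason the problem is highly ill-posed and benefits from learned priors.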

FlowNet

In our ICCV'15 paper, we presented two CNN architectures that estimate optical flow from a pair of images. We train the networks end-to-end on a GPU, and our system performs as well as state-of-the-art techniques.
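The input/output contract of the simpler of the two architectures can be sketched as follows. This is only an illustration of the interface, with a single random 3x3 convolution standing in for the real trained encoder/decoder: the two images are stacked along the channel axis and mapped to a 2-channel flow field (u, v).

```python
import numpy as np

def flownet_style_stub(img1, img2, seed=0):
    """Interface sketch only: two RGB images in, a dense flow field out.
    A single random convolution replaces the trained network.

    img1, img2: (3, H, W) images. Returns a (2, H, W) flow field.
    """
    rng = np.random.default_rng(seed)
    x = np.concatenate([img1, img2], axis=0)       # stacked input, (6, H, W)
    c_in, h, w = x.shape
    weight = rng.standard_normal((2, c_in, 3, 3)) * 0.1
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))       # zero-pad for same-size output
    flow = np.zeros((2, h, w))
    for o in range(2):                             # output channels: u, v
        for i in range(c_in):                      # input channels
            for dy in range(3):
                for dx in range(3):
                    flow[o] += weight[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + w]
    return flow
```

In the real network, a stack of such convolutions with learned weights, followed by a refinement/upsampling stage, produces the flow; the stub only shows that the mapping is from a 6-channel stacked pair to a 2-channel field of the same spatial size.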

Journal Articles
2016
q-Space Deep Learning: Twelve-Fold Shorter and Model-Free Diffusion MRI Scans (V. Golkov, A. Dosovitskiy, J. I. Sperl, M. I. Menzel, M. Czisch, P. Sämann, T. Brox, D. Cremers), In IEEE Transactions on Medical Imaging, volume 35, 2016. [bib] [pdf] (Special Issue on Deep Learning)
Conference and Workshop Papers
2017
Associative Deep Clustering - Training a Classification Network with no Labels (P. Haeusser, J. Plapp, V. Golkov, E. Aljalbout, D. Cremers), Submitted to CVPR 2018, 2017. [bib]
Regularization for Deep Learning: A Taxonomy (J. Kukačka, V. Golkov, D. Cremers), In ArXiv preprint, 2017. (arXiv:1710.10686) [bib] [pdf]
Associative Domain Adaptation (P. Haeusser, T. Frerix, A. Mordvintsev, D. Cremers), In IEEE International Conference on Computer Vision (ICCV), 2017. ([code]) [bib] [pdf]
Better Text Understanding Through Image-To-Text Transfer (K. Kurach, S. Gelly, M. Jastrzebski, P. Haeusser, O. Teytaud, D. Vincent, O. Bousquet), In ArXiv preprint, 2017. (arXiv:1705.08386) [bib] [pdf]
One-Shot Video Object Segmentation (S. Caelles, K.-K. Maninis, J. Pont-Tuset, L. Leal-Taixé, D. Cremers, L. Van Gool), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. [bib] [pdf]
Deep Depth From Focus (C. Hazirbas, L. Leal-Taixé, D. Cremers), In ArXiv preprint arXiv:1704.01085, 2017. ([arxiv], [dataset]) [bib]
Learning Proximal Operators: Using Denoising Networks for Regularizing Inverse Imaging Problems (T. Meinhardt, M. Moeller, C. Hazirbas, D. Cremers), In IEEE International Conference on Computer Vision (ICCV), 2017. ([arxiv]) [bib]
Learning by Association - A versatile semi-supervised training method for neural networks (P. Haeusser, A. Mordvintsev, D. Cremers), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. ([code]) [bib] [pdf]
3D Deep Learning for Biological Function Prediction from Physical Fields (V. Golkov, M. J. Skwark, A. Mirchev, G. Dikov, A. R. Geanes, J. Mendenhall, J. Meiler, D. Cremers), In ArXiv preprint, 2017. (arXiv:1704.04039) [bib] [pdf]
Establishment of an interdisciplinary workflow of machine learning-based Radiomics in sarcoma patients (J.C. Peeken, C. Knie, V. Golkov, K. Kessel, F. Pasa, Q. Khan, M. Seroglazov, J. Kukačka, T. Goldberg, L. Richter, J. Reeb, B. Rost, F. Pfeiffer, D. Cremers, F. Nüsslin, S.E. Combs), In 23. Jahrestagung der Deutschen Gesellschaft für Radioonkologie (DEGRO), 2017. [bib]
Image-based localization using LSTMs for structured feature correlation (F. Walch, C. Hazirbas, L. Leal-Taixé, T. Sattler, S. Hilsenbeck, D. Cremers), In IEEE International Conference on Computer Vision (ICCV), 2017. ([arxiv]) [bib]
2016
FuseNet: Incorporating Depth into Semantic Segmentation via Fusion-based CNN Architecture (C. Hazirbas, L. Ma, C. Domokos, D. Cremers), In Asian Conference on Computer Vision, 2016. ([code]) [bib] [pdf]
Protein Contact Prediction from Amino Acid Co-Evolution Using Convolutional Networks for Graph-Valued Images (V. Golkov, M. J. Skwark, A. Golkov, A. Dosovitskiy, T. Brox, J. Meiler, D. Cremers), In Annual Conference on Neural Information Processing Systems (NIPS), 2016. ([video]) [bib] [pdf] (Oral Presentation, acceptance rate under 2%)
A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation (N. Mayer, E. Ilg, P. Haeusser, P. Fischer, D. Cremers, A. Dosovitskiy, T. Brox), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. (arXiv:1512.02134) [bib] [pdf]
2015
CAPTCHA Recognition with Active Deep Learning (F. Stark, C. Hazirbas, R. Triebel, D. Cremers), In GCPR Workshop on New Challenges in Neural Computation, 2015. ([code]) [bib] [pdf]
FlowNet: Learning Optical Flow with Convolutional Networks (A. Dosovitskiy, P. Fischer, E. Ilg, P. Haeusser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, T. Brox), In IEEE International Conference on Computer Vision (ICCV), 2015. ([video], [code]) [bib] [pdf] [doi]
q-Space Deep Learning for Twelve-Fold Shorter and Model-Free Diffusion MRI Scans (V. Golkov, A. Dosovitskiy, P. Sämann, J. I. Sperl, T. Sprenger, M. Czisch, M. I. Menzel, P. A. Gómez, A. Haase, T. Brox, D. Cremers), In Medical Image Computing and Computer Assisted Intervention (MICCAI), 2015. [bib] [pdf]

Informatik IX
Chair for Computer Vision & Artificial Intelligence

Boltzmannstrasse 3
85748 Garching

info@vision.in.tum.de