Description

Learning has given a considerable and surprising boost to computer vision, and deep neural networks appear to be the new winners of the fierce race to reduce classification errors. Algorithmic refinements now go well beyond our understanding of the problem, and seem to render any study of computer vision models irrelevant. Yet learning from high-dimensional data such as images suffers from a curse of dimensionality, which predicts a combinatorial explosion. Why do these neural architectures avoid this curse? Is this rooted in properties of images and visual tasks? Can these properties be related to high-dimensional problems in other fields? We shall explore the mathematical roots of these questions and tell a story in which invariants, contractions, sparsity, dimension reduction and multiscale analysis play important roles. Images and examples will give a colorful background to the talk.
