TechTalks from event: CVPR 2014 Oral Talks

Special 3: Plenary Session

  • Plenary Talk: Are Deep Networks a Solution to Curse of Dimensionality? Speaker: Stéphane Mallat (École Normale Supérieure)
    Learning gave a considerable and surprising boost to computer vision, and deep neural networks appear to be the new winners of the fierce race on classification errors. Algorithmic refinements now go well beyond our understanding of the problem and seem to render any study of computer vision models irrelevant. Yet learning from high-dimensional data such as images suffers from a curse of dimensionality, which predicts a combinatorial explosion (see the short numerical sketch after this listing). Why do these neural architectures avoid this curse? Is this rooted in properties of images and visual tasks? Can these properties be related to high-dimensional problems in other fields? We shall explore the mathematical roots of these questions and tell a story in which invariants, contractions, sparsity, dimension reduction and multiscale analysis play important roles. Images and examples will give a colorful background to the talk.
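
The sketch below is not part of the talk; it is only a minimal numerical illustration of the curse of dimensionality the abstract alludes to. It samples points uniformly in [0, 1]^d and prints how the relative contrast between the nearest and farthest distances to a query point collapses as the dimension d grows (the dimensions, sample size, and random seed are arbitrary choices for the demo).

```python
import numpy as np

rng = np.random.default_rng(0)

# Empirical illustration of the curse of dimensionality: as d grows,
# pairwise Euclidean distances between uniformly sampled points
# concentrate, so "nearest" and "farthest" neighbors become nearly
# indistinguishable and distance-based reasoning loses its power.
for d in (2, 10, 100, 1000):
    x = rng.uniform(size=(500, d))        # 500 points in [0, 1]^d
    q = rng.uniform(size=d)               # a random query point
    dist = np.linalg.norm(x - q, axis=1)  # distances from each point to the query
    contrast = (dist.max() - dist.min()) / dist.min()
    print(f"d={d:5d}  relative distance contrast = {contrast:.3f}")
```

In low dimensions the printed contrast is large, meaning distances carry real information; as d grows it shrinks toward zero, which is one concrete face of the combinatorial explosion mentioned in the abstract.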