The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide, to a greater or lesser degree, the different explanatory factors of variation behind the data. Although domain knowledge can be used to help design representations, learning can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms. We view the ultimate goal of these algorithms as disentangling the unknown underlying factors of variation that explain the observed data. This tutorial reviews the basics of feature learning and deep learning, as well as recent work relating these subjects to probabilistic modeling and manifold learning. An objective is to raise questions and issues about appropriate objectives for learning good representations, about computing representations (i.e., inference), and about the geometrical connections between representation learning, density estimation, and manifold learning.

  1. Motivations and Scope
    • Feature / Representation learning
    • Distributed representations
    • Exploiting unlabeled data
    • Deep representations
    • Multi-task / Transfer learning
    • Invariance vs Disentangling
  2. Algorithms
    • Probabilistic models and RBM variants
    • Auto-encoder variants (sparse, denoising, contractive); a minimal denoising sketch follows this outline
    • Explaining away, sparse coding and Predictive Sparse Decomposition
    • Deep variants
  3. Analysis, Issues and Practice
    • Tips and tricks
    • Partition function gradient
    • Inference
    • Mixing between modes
    • Geometry and probabilistic interpretations of auto-encoders
    • Open questions
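
As a concrete illustration of one of the auto-encoder variants listed under Algorithms above, here is a minimal sketch of a denoising auto-encoder in NumPy. It is not the tutorial's implementation: the tied weights, sigmoid units, squared-error reconstruction, masking-noise corruption, and all hyper-parameters (hidden size, learning rate, corruption level, number of epochs) are illustrative assumptions.

    # Minimal denoising auto-encoder sketch (assumed setup: tied weights,
    # sigmoid units, squared-error reconstruction, masking-noise corruption).
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    # Toy data: 500 examples of 20-dimensional inputs in [0, 1].
    X = rng.random((500, 20))

    n_visible, n_hidden = X.shape[1], 10
    W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # tied encoder/decoder weights
    b_h = np.zeros(n_hidden)                               # hidden (encoder) bias
    b_v = np.zeros(n_visible)                              # visible (decoder) bias

    lr, corruption, n_epochs = 0.1, 0.3, 50                # illustrative hyper-parameters

    for epoch in range(n_epochs):
        # Corrupt the input by zeroing a random subset of its entries.
        mask = rng.random(X.shape) > corruption
        X_tilde = X * mask

        # Encode the corrupted input, decode, and compare against the CLEAN input.
        H = sigmoid(X_tilde @ W + b_h)      # hidden representation
        R = sigmoid(H @ W.T + b_v)          # reconstruction
        err = R - X                         # squared-error gradient at the output

        # Back-propagate through the decoder and the (tied) encoder.
        dR = err * R * (1 - R)
        dH = (dR @ W) * H * (1 - H)
        grad_W = X_tilde.T @ dH + dR.T @ H
        W -= lr * grad_W / len(X)
        b_h -= lr * dH.mean(axis=0)
        b_v -= lr * dR.mean(axis=0)

    # Learned hidden code for the clean inputs.
    codes = sigmoid(X @ W + b_h)

The point the sketch illustrates is that the model is trained to reconstruct the clean input from a corrupted version of it, which pushes the hidden code to capture stable structure in the data (the explanatory factors) rather than simply copy its input.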
