TechTalks from event: ICML 2012 Invited Talks

  • Learning Hierarchies of Invariant Features Authors: Yann LeCun, Courant Institute of Mathematical Sciences and Center for Neural Science, NYU

    Intelligent perceptual tasks such as vision and audition require the construction of good internal representations. Machine Learning has been very successful at producing classifiers, but the next big challenge for ML, computer vision, and computational neuroscience is to devise learning algorithms that can learn features and internal representations automatically.

    Theoretical and empirical evidence suggests that the perceptual world is best represented by a multi-stage hierarchy in which features in successive stages are increasingly global, invariant, and abstract. An important open problem is to devise “deep learning” methods for multi-stage architectures that can automatically learn invariant feature hierarchies from labeled and unlabeled data.

    A number of unsupervised methods for learning invariant features, based on sparse coding and sparse auto-encoders, will be described: convolutional sparse auto-encoders, invariance through group sparsity, invariance through lateral inhibition, and invariance through temporal constancy. These methods are used to pre-train convolutional networks (ConvNets), biologically-inspired architectures consisting of multiple stages of filter banks interspersed with non-linear, spatial pooling, and contrast normalization operations.
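
    To make the stage structure concrete, here is a minimal numpy sketch of one ConvNet stage. It is a sketch under assumptions rather than the talk's implementation: the absolute-value rectification, 2×2 max pooling, and the per-map simplification of contrast normalization are illustrative choices.

        import numpy as np

        def convnet_stage(image, filters, pool=2, eps=1e-6):
            # One stage: filter bank -> non-linearity -> spatial pooling ->
            # contrast normalization (simplified here to per-map variance
            # normalization; real ConvNets normalize over local neighborhoods).
            n, k = filters.shape[0], filters.shape[1]
            h, w = image.shape[0] - k + 1, image.shape[1] - k + 1
            maps = np.empty((n, h, w))
            for i in range(n):                      # filter bank: "valid" 2-D correlation
                for y in range(h):
                    for x in range(w):
                        maps[i, y, x] = np.sum(image[y:y + k, x:x + k] * filters[i])
            maps = np.abs(maps)                     # rectification non-linearity
            h2, w2 = h - h % pool, w - w % pool     # crop so pooling windows tile evenly
            maps = maps[:, :h2, :w2].reshape(n, h2 // pool, pool, w2 // pool, pool)
            maps = maps.max(axis=(2, 4))            # max pooling over pool x pool windows
            return maps / (maps.std(axis=(1, 2), keepdims=True) + eps)

        rng = np.random.default_rng(0)
        features = convnet_stage(rng.standard_normal((32, 32)),   # toy 32x32 "image"
                                 rng.standard_normal((8, 5, 5)))  # eight 5x5 filters
        print(features.shape)                       # (8, 14, 14)

    Stacking such stages is what yields the increasingly global, invariant, and abstract features described above; a second stage would convolve across all eight output maps at once.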

    Several applications will be shown through videos and live demos, including a pedestrian detector, a category-level object recognition system that can be trained on the fly, and a system that can label every pixel in an image with the category of the object it belongs to (scene parsing). Specialized hardware architectures that run these systems in real time will also be described.

  • Modern Algorithmic Tools for Analyzing Data Streams Authors: Sethu Muthukrishnan, Professor of Computer Science, Rutgers University

    We now have a second generation of algorithmic tools for analyzing data streams that go beyond the initial tools for summarizing a single stream in small space. The new tools deal with distributed data, stochastic models, dynamic graph and matrix objects, and more; they optimize communication, the number of parallel rounds, and privacy, among other things. I will provide an overview of these tools and explore their application to Machine Learning problems.
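
    For reference, the sketch below illustrates one canonical first-generation tool of the kind the second-generation work builds on: the Count-Min sketch of Cormode and Muthukrishnan, which summarizes item frequencies in a stream in small space. The width and depth defaults and the salted use of Python's built-in hash are illustrative assumptions.

        import random

        class CountMinSketch:
            # Small-space frequency summary: with width ~ e/epsilon and depth
            # ~ ln(1/delta), estimates exceed the true count by at most
            # epsilon * (stream length) with probability at least 1 - delta.
            def __init__(self, width=272, depth=5, seed=0):
                rng = random.Random(seed)
                self.width = width
                self.table = [[0] * width for _ in range(depth)]
                self.salts = [rng.getrandbits(64) for _ in range(depth)]  # one hash per row

            def update(self, item, count=1):
                for row, salt in enumerate(self.salts):   # bump one counter per row
                    self.table[row][hash((salt, item)) % self.width] += count

            def query(self, item):
                # Collisions only ever add, so every row over-counts; the minimum
                # over rows is the tightest available estimate and never underestimates.
                return min(self.table[row][hash((salt, item)) % self.width]
                           for row, salt in enumerate(self.salts))

        cms = CountMinSketch()
        for word in ["spam", "ham", "spam", "spam"]:      # a tiny stream
            cms.update(word)
        print(cms.query("spam"))       # at least 3 (exactly 3 barring collisions)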

  • Information Theory and Sustainable Energy Authors: David MacKay FRS, Chief Scientific Advisor, DECC; Professor of Natural Philosophy, University of Cambridge

    How easy is it to get off our fossil fuel habit? Can European countries live on their own renewables? What do the fundamental limits of physics say? How does our current energy consumption compare with our sustainable energy options? This talk will offer a straight-talking assessment of the numbers; will discuss how to make energy plans that add up; and will hunt for connections between machine learning, climate change, and sustainable energy.
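
    As a taste of the kind of arithmetic behind plans that add up, here is a back-of-envelope check in the spirit of MacKay's writing; the round figures below are illustrative estimates assumed for this sketch, not numbers from the talk.

        # MacKay-style back-of-envelope check: does a wind-only plan add up?
        consumption_kwh_per_day = 125.0   # UK energy use per person, all forms (estimate)
        wind_w_per_m2 = 2.0               # onshore wind farm power per unit land area
        people_per_km2 = 250.0            # UK population density (estimate)

        land_m2_per_person = 1e6 / people_per_km2   # ~4000 m^2 each
        covered_fraction = 0.10                     # cover 10% of the land in turbines
        wind_w = wind_w_per_m2 * land_m2_per_person * covered_fraction
        wind_kwh_per_day = wind_w * 24 / 1000       # watts -> kWh per day

        print(f"wind {wind_kwh_per_day:.0f} kWh/d vs demand {consumption_kwh_per_day:.0f} kWh/d")
        # -> wind 19 kWh/d vs demand 125 kWh/d: onshore wind alone does not add up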

  • Machine Learning that Matters Authors: Kiri Wagstaff

    Much of current machine learning (ML) research has lost its connection to problems of import to the larger world of science and society. From this perspective, there exist glaring limitations in the data sets we investigate, the metrics we employ for evaluation, and the degree to which results are communicated back to their originating domains. What changes in how we conduct research are needed to increase the impact of ML? We present six Impact Challenges to explicitly focus the field’s energy and attention, and we discuss existing obstacles that must be addressed. We aim to inspire ongoing discussion and focus on ML that matters.