
  • Building Meaningful Customer Experiences

    A one-day workshop on "Building Meaningful Customer Experiences" by design expert Nathan Shedroff, author of multiple books including Making Meaning, Experience Design 1.1, Design is the Problem, Experience Design 1 Cards, and Dictionary of Sustainable Management.

  • How to start a startup as a non-technical founder

    Tech Startup from the Ground Up: Advice from a Non-Technical Founder. This workshop will cover the following topics:

    • Day Job to Dream Job: Quit and Commit
    • Execute! Learn from the Honey Badger
    • Structuring & Hiring: Startup Oxygen
    • Fundraising: Kickstart, Crowd-source, Accelerate, and Call in the Angels
    • Growing Your Business: If Plan A Fails, there are 25 more Letters
    • Recommended Resources, Questions from Viewers

  • Green Initiatives Conference and Expo 2012

    3rd Annual Green Initiatives Conference: Sustainability Strategies that are GOOD for the ENVIRONMENT & GREAT for your BOTTOM LINE

  • FailCon 2012

    Watch the webcast for free. FailCon is a one-day conference for technology entrepreneurs, investors, developers and designers to study their own and others' failures and prepare for success.

  • Other ICML 2012 Tutorials

    Probabilistic Topic Models; Representation Learning; Mirror Descent and Saddle Point First Order Algorithms

  • Performance Evaluation for Learning Algorithms: Techniques, Application and Issues

    The purpose of the tutorial is to promote an appreciation of the need for rigorous and objective evaluation, along with an understanding of the available alternatives and their assumptions, constraints, and contexts of application. Machine learning researchers and practitioners alike will benefit from the tutorial, which discusses the need for sound evaluation strategies and presents practical approaches and tools that go well beyond those described in existing machine learning and data mining textbooks.
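
    As a small taste of the topic, here is a minimal sketch (my own illustration, not the tutorial's material; the dataset and models are arbitrary choices) of going one step beyond a single accuracy figure: comparing two classifiers on identical cross-validation folds with a paired t-test, while noting a caveat that rigorous evaluation must address.

    ```python
    # A minimal sketch: compare two classifiers on identical cross-validation
    # folds with a paired t-test, rather than quoting one accuracy number.
    from scipy import stats
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    cv = KFold(n_splits=10, shuffle=True, random_state=0)   # same folds for both

    scores_a = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=cv)
    scores_b = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)

    # Paired t-test over the per-fold scores. Caveat: folds share training
    # data, so the test's independence assumption is violated -- exactly the
    # kind of subtlety a rigorous evaluation has to account for.
    t, p = stats.ttest_rel(scores_a, scores_b)
    print(f"A: {scores_a.mean():.3f}  B: {scores_b.mean():.3f}  t={t:.2f}  p={p:.3f}")
    ```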

  • Spectral Approaches to Learning Latent Variable Models

    Examples of popular latent variable models include latent tree graphical models and dynamical system models, both of which occupy a fundamental place in engineering, control theory, and economics, as well as the physical, biological, and social sciences. Unfortunately, to discover the right latent state representation and model parameters, we must solve difficult structural and temporal credit assignment problems. Work on learning latent variable structure has predominantly relied on likelihood maximization and local search heuristics such as expectation maximization (EM); these heuristics must contend with a search space containing a host of bad local optima, and may therefore require impractically many restarts to reach a prescribed training precision.

    This tutorial will focus on a recently discovered class of spectral learning algorithms. These algorithms hold the promise of overcoming these problems and enabling learning of latent structure in tree and dynamical system models. Unlike the EM algorithm, spectral methods are computationally efficient, statistically consistent, and have no local optima; in addition, they can be simple to implement, and they achieve state-of-the-art practical performance on many interesting learning problems.

    We will describe the main theoretical, algorithmic, and empirical results related to spectral learning algorithms, starting with an overview of linear system identification results obtained in the last two decades, and then focusing on the remarkable recent progress in learning nonlinear dynamical systems, latent tree graphical models, and kernel-based nonparametric models.
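
    To make the flavor of these methods concrete, here is a minimal, noise-free sketch (my own illustration, not code from the tutorial) of the classical Ho-Kalman construction from linear system identification: the state dimension and the dynamics are recovered from a singular value decomposition of a Hankel matrix of impulse-response coefficients, with no iterative search and no local optima.

    ```python
    # A minimal spectral (subspace) sketch: recover a linear dynamical system
    # x_{t+1} = A x_t + B u_t, y_t = C x_t from its impulse response via the
    # SVD of a Hankel matrix (Ho-Kalman). Noise-free and illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    n, T = 3, 20                      # true state dimension, horizon
    A = np.diag([0.9, 0.5, -0.3])     # stable true dynamics
    B = rng.normal(size=(n, 1))
    C = rng.normal(size=(1, n))

    # Markov parameters h_k = C A^k B (the impulse response).
    h = [C @ np.linalg.matrix_power(A, k) @ B for k in range(T)]

    # Hankel matrix H[i, j] = h_{i+j} (scalar inputs/outputs here).
    m = T // 2
    H = np.array([[h[i + j][0, 0] for j in range(m)] for i in range(m)])

    # The SVD reveals the state dimension (the rank of H), with no local optima.
    U, s, Vt = np.linalg.svd(H)
    r = int(np.sum(s > 1e-8))         # estimated state dimension
    Or = U[:, :r] * np.sqrt(s[:r])    # observability factor

    # "Shift trick": A is recovered up to similarity by solving
    # Or[:-1] @ A_hat = Or[1:] in the least-squares sense.
    A_hat = np.linalg.lstsq(Or[:-1], Or[1:], rcond=None)[0]

    print("estimated dimension:", r)
    print("eigenvalues of A_hat:", np.sort(np.linalg.eigvals(A_hat).real))
    # The eigenvalues match the true dynamics {0.9, 0.5, -0.3} up to ordering.
    ```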

  • PAC-Bayesian Analysis in Supervised, Unsupervised, and Reinforcement Learning

    PAC-Bayesian analysis is a basic and very general tool for data-dependent analysis in machine learning. By now, it has been applied in areas as diverse as supervised learning, unsupervised learning, and reinforcement learning, leading to state-of-the-art algorithms and accompanying generalization bounds. PAC-Bayesian analysis, in a sense, combines the best of Bayesian methods and PAC learning: (1) it provides an easy way to exploit prior knowledge (like Bayesian methods); (2) it provides strict and explicit generalization guarantees (like VC theory); and (3) it is data-dependent and provides an easy and strict way of exploiting benign conditions (like Rademacher complexities). In addition, PAC-Bayesian bounds directly lead to efficient learning algorithms.

    We will start with a general introduction to PAC-Bayesian analysis, which should be accessible to any student familiar with machine learning at a basic level. We will then survey multiple forms of PAC-Bayesian bounds and their numerous applications in different fields (including supervised and unsupervised learning, finite and continuous domains, and the very recent extension to martingales and reinforcement learning). Some of these applications will be explained in more detail, while others will be surveyed at a high level. We will also describe the relations and distinctions between PAC-Bayesian analysis, Bayesian learning, VC theory, and Rademacher complexities, and we will discuss the role, value, and shortcomings of frequentist bounds that are inspired by Bayesian analysis.
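
    As a concrete taste of such bounds, here is a minimal sketch (my own illustration, not the tutorial's code) that numerically evaluates one common form, the PAC-Bayes-kl bound: with probability at least 1 - delta, kl(empirical loss || true loss) <= (KL(rho || pi) + ln(2*sqrt(n)/delta)) / n, where kl is the KL divergence between Bernoulli distributions; the bound is inverted by bisection to upper-bound the true loss.

    ```python
    # A minimal sketch of evaluating the PAC-Bayes-kl generalization bound.
    import math

    def binary_kl(q, p):
        """KL divergence between Bernoulli(q) and Bernoulli(p)."""
        eps = 1e-12
        p = min(max(p, eps), 1 - eps)
        q = min(max(q, eps), 1 - eps)
        return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

    def kl_inverse_upper(q, c):
        """Largest p in [q, 1] with binary_kl(q, p) <= c, by bisection."""
        lo, hi = q, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if binary_kl(q, mid) <= c:
                lo = mid
            else:
                hi = mid
        return lo

    def pac_bayes_kl_bound(emp_loss, kl_rho_pi, n, delta=0.05):
        """Upper bound on the true loss of the Gibbs classifier."""
        c = (kl_rho_pi + math.log(2 * math.sqrt(n) / delta)) / n
        return kl_inverse_upper(emp_loss, c)

    # Example (made-up numbers): empirical Gibbs loss 0.10 on n = 10000
    # samples, with KL(rho || pi) = 20.
    print(pac_bayes_kl_bound(0.10, 20.0, 10000))
    ```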

  • ICML 2012 Tutorial on Prediction, Belief, and Markets

    Prediction markets are financial markets designed to aggregate opinions across large populations of traders. A typical prediction market offers a set of securities with payoffs determined by the future state of the world. For example, a market might offer a security worth $1 if Barack Obama is re-elected in 2012 and $0 otherwise. Roughly speaking, a trader who believes the probability of Obama's re-election is p should be willing to buy this security at any price less than $p and (short) sell this security at any price greater than $p. For this reason, the going price of this security could be interpreted as traders' collective belief about the likelihood of Obama's re-election. Prediction markets have been used to generate accurate forecasts in a variety of domains including politics, disease surveillance, business, and entertainment, and are cited in the media increasingly often.

    This tutorial will cover some of the basic mathematical ideas used in the design of prediction markets and illustrate several fundamental connections between these ideas and techniques used in machine learning. We will begin with an overview of proper scoring rules, which can be used to measure the accuracy of a single entity's prediction and are closely related to proper loss functions. We will then discuss market scoring rules: automated market makers, based on proper scoring rules, that can be used to aggregate the predictions of many forecasters. We will describe how market scoring rules can be implemented as inventory-based markets in which securities are bought and sold, and we will describe recent research exploring a duality-based interpretation of market scoring rules, which can be exploited to design new markets that run efficiently over very large state spaces. Finally, we will explore the fundamental mathematical connections between market scoring rules and two areas of machine learning: online "no-regret" learning and variational inference with exponential families.
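
    For illustration, here is a minimal sketch of Hanson's logarithmic market scoring rule (LMSR), one standard market scoring rule of the kind described above (the class layout and parameter values are my own choices): the market maker posts the cost function C(q) = b * log(sum_i exp(q_i / b)); a trade moving the outstanding shares from q to q' costs C(q') - C(q), and the instantaneous prices form a probability vector.

    ```python
    # A minimal sketch of an LMSR automated market maker.
    import numpy as np

    class LMSRMarketMaker:
        def __init__(self, n_outcomes, b=100.0):
            self.b = b                       # liquidity parameter (arbitrary here)
            self.q = np.zeros(n_outcomes)    # outstanding shares per outcome

        def cost(self, q):
            return self.b * np.log(np.sum(np.exp(q / self.b)))

        def prices(self):
            # Softmax of q / b: the price of each $1-if-outcome security,
            # interpretable as the market's collective probability estimate.
            e = np.exp(self.q / self.b)
            return e / e.sum()

        def buy(self, outcome, shares):
            """Charge a trader for `shares` of `outcome`; returns the cost."""
            new_q = self.q.copy()
            new_q[outcome] += shares
            payment = self.cost(new_q) - self.cost(self.q)
            self.q = new_q
            return payment

    mm = LMSRMarketMaker(2)
    print(mm.prices())       # [0.5, 0.5] before any trades
    print(mm.buy(0, 50))     # cost of buying 50 shares of outcome 0
    print(mm.prices())       # the price of outcome 0 has risen above 0.5
    ```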

    This tutorial will be self-contained. No background on markets or specific areas of machine learning is required.

  • Tutorial on Causal inference - conditional independences and beyond

    Machine learning has traditionally focused on prediction: given observations generated by an unknown stochastic dependency, the goal is to infer a law that correctly predicts future observations generated by the same dependency. Causal inference, in contrast, tries to infer the causal structure underlying the observed dependencies. More precisely, one tries to infer the behavior of a system under interventions without actually performing them, which does not fit into any traditional prediction scenario. It is still debated whether this is possible at all and, even granting that it is, it is a priori unclear why machine learning tools should be helpful for the task.

    Since the 1980s there has been a community of researchers, mostly from statistics, philosophy, and computer science, who have developed methods for inferring causal relationships from observational data. The pioneering work of Glymour, Scheines, Spirtes, and Pearl describes assumptions that link conditional statistical dependences to causality, which renders many causal inference problems solvable. The typical task solved by the corresponding algorithms reads: given observations from the joint distribution on the variables X1,...,Xn with n ≥ 3, infer the causal directed acyclic graph (or parts of it).
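
    As a toy illustration of how conditional independences constrain causal structure (my own sketch, not from the tutorial): in a chain X → Z → Y, X and Y are marginally dependent but become independent once we condition on Z, which, under the usual assumptions, rules out a direct edge between X and Y.

    ```python
    # A minimal sketch of the conditional-independence reasoning behind
    # independence-based (PC-style) causal inference, on three variables.
    import numpy as np

    def partial_corr(x, y, z):
        """Correlation of x and y after linearly regressing out z."""
        def resid(v):
            beta = np.polyfit(z, v, 1)
            return v - np.polyval(beta, z)
        rx, ry = resid(x), resid(y)
        return np.corrcoef(rx, ry)[0, 1]

    rng = np.random.default_rng(0)
    x = rng.normal(size=2000)
    z = 0.8 * x + rng.normal(size=2000)   # X -> Z
    y = 0.8 * z + rng.normal(size=2000)   # Z -> Y

    print(np.corrcoef(x, y)[0, 1])   # clearly nonzero: X, Y marginally dependent
    print(partial_corr(x, y, z))     # near zero: X independent of Y given Z,
                                     # so no direct edge between X and Y
    ```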

    Recently, this work has been complemented by several researchers from machine learning who have described methods that do not rely on conditional independences alone but employ other properties of joint probability distributions. These methods use established tools of machine learning such as regression and reproducing kernel Hilbert spaces. In contrast to the approaches above, they can sometimes infer the causal direction when only two variables are observed, as the sketch below illustrates. Remarkably, this can be helpful for more traditional machine learning tasks like prediction under changing background conditions, because that task has different solutions depending on whether the predicted variable is the cause or the effect.
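
    Here is a minimal sketch of one such two-variable direction test in the additive-noise spirit (my own illustration, not the tutorial's exact method): regress each variable on the other nonparametrically and prefer the direction whose residuals are more independent of the input, with independence scored here by a simple HSIC statistic.

    ```python
    # A minimal additive-noise-style causal direction test for two variables.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    def hsic(a, b, sigma=1.0):
        """Biased HSIC statistic with RBF kernels; larger = more dependent."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        n = len(a)
        K = np.exp(-(a[:, None] - a[None, :]) ** 2 / (2 * sigma ** 2))
        L = np.exp(-(b[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))
        H = np.eye(n) - np.ones((n, n)) / n
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2

    def residuals(x, y):
        """Residuals of a nonparametric regression of y on x."""
        model = KNeighborsRegressor(n_neighbors=10).fit(x[:, None], y)
        return y - model.predict(x[:, None])

    rng = np.random.default_rng(0)
    x = rng.uniform(-2, 2, 500)
    y = x ** 3 + rng.uniform(-1, 1, 500)   # ground truth: X causes Y

    score_xy = hsic(x, residuals(x, y))    # residual dependence for X -> Y
    score_yx = hsic(y, residuals(y, x))    # residual dependence for Y -> X
    print("X -> Y" if score_xy < score_yx else "Y -> X")
    ```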

    Outline

    - Introductory remarks: causal dependences versus statistical dependences
    - Independence based causal inference: assumptions/algorithms/limitations
    - New inference principles via restricting model classes
    - Foundation of new inference rules by algorithmic information theory
    - How machine learning can benefit from causal inference