TechTalks from event: ICML 2011

Tutorial: Machine Learning and Robotics

  • Machine Learning and Robotics: Part 1. Authors: Marc Toussaint, FU Berlin
    Joint research on Machine Learning and Robotics has received increasing attention recently. There are two reasons for this trend. First, robots that cannot learn lack one of the most interesting aspects of intelligence. Much of classical robotics focused on reasoning, optimal control, and sensor processing given models of the robot and its environment. While this approach is successful for many industrial applications, it falls short of the more ambitious goal of Robotics as a test platform for our understanding of artificial and natural intelligence. Learning has therefore become a central topic in modern Robotics research. Second, Machine Learning has proven very successful in many applications of statistical data analysis, such as speech, vision, text, and genetics. However, although Machine Learning methods largely outperform humans in extracting statistical models from abstract data sets, our understanding of learning in natural environments, and of learning what is relevant for behavior in natural environments, is limited. Robotics research therefore motivates new and interesting kinds of challenges for Machine Learning. This tutorial is aimed at Machine Learning researchers interested in the challenges of Robotics. It will introduce the basics of Robotics in ML lingo and discuss which kinds of ML research are particularly promising for advancing the field of Learning in Robotics.
  • Machine Learning and Robotics: Part 2. Authors: Marc Toussaint, FU Berlin
    Continuation of the first half of the tutorial.

Best Paper

  • Computational Rationalization: The Inverse Equilibrium Problem. Authors: Kevin Waugh, Brian Ziebart, Drew Bagnell
    Modeling the purposeful behavior of imperfect agents from a small number of observations is a challenging task. When restricted to the single-agent decision-theoretic setting, inverse optimal control techniques assume that observed behavior is an approximately optimal solution to an unknown decision problem. These techniques learn a utility function that explains the example behavior and can then be used to accurately predict or imitate future behavior in similar observed or unobserved situations. In this work, we consider similar tasks in competitive and cooperative multi-agent domains. Here, unlike in single-agent settings, a player cannot myopically maximize its reward; it must speculate on how the other agents may act to influence the game's outcome. Employing the game-theoretic notion of regret and the principle of maximum entropy, we introduce a technique for predicting and generalizing behavior, as well as recovering a reward function in these domains. (A toy numerical sketch of this regret-plus-maximum-entropy idea appears below.)
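
The paper's actual construction is in the full text; as rough intuition only, the sketch below shows how a maximum-entropy distribution over joint actions, constrained by expected regret, takes a Gibbs (exponential) form: low-regret joint actions receive more probability mass. The payoff matrices, the coordination game, and the temperature lam are illustrative assumptions, not the paper's setup.

    import numpy as np

    # A hypothetical two-player coordination game. U[i][r, c] is player i's
    # payoff when player 0 plays row r and player 1 plays column c.
    U = [np.array([[2.0, 0.0], [0.0, 1.0]]),   # player 0
         np.array([[2.0, 0.0], [0.0, 1.0]])]   # player 1

    def joint_regret(r, c):
        """Sum over players of the gain from a unilateral best-response
        deviation at the joint action (r, c)."""
        dev0 = U[0][:, c].max() - U[0][r, c]   # player 0 deviates over rows
        dev1 = U[1][r, :].max() - U[1][r, c]   # player 1 deviates over columns
        return dev0 + dev1

    # Maximum entropy subject to an expected-regret constraint yields a Gibbs
    # distribution p(a) proportional to exp(-lam * regret(a)), where the dual
    # variable lam would in practice be fit to the observed behavior.
    lam = 1.0  # hypothetical temperature, chosen here for illustration
    R = np.array([[joint_regret(r, c) for c in range(2)]
                  for r in range(2)])
    p = np.exp(-lam * R)
    p /= p.sum()
    print(p)  # the two coordination equilibria (regret 0) get most of the mass

In this toy game the diagonal joint actions have zero regret and the off-diagonal ones have regret 3, so the fitted distribution concentrates on near-equilibrium behavior, which is the qualitative effect the abstract describes.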