• Tutorial: Motion Planning for Dynamic Environments
    This all-day tutorial introduces the audience to motion planning algorithms and the associated mathematical concepts. No prior experience in this area is assumed. The morning session starts from the basics of collision-free path planning, introducing geometric representations, transformations, configuration spaces, sampling-based motion planning, and combinatorial motion planning. The afternoon session covers methods that address concerns that arise in practice when the robot's environment is changing or incompletely specified; this falls under the heading of planning in dynamic environments. Fundamental limitations of planning in this context are discussed, followed by a survey of successful approaches from specific application areas, such as planning for humanoids, autonomous vehicles, and virtual agents.
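    Sampling-based motion planning, one of the morning topics, can be illustrated with a rapidly-exploring random tree (RRT), which grows a tree of collision-free configurations by repeatedly sampling the configuration space. A minimal sketch in plain Python; the 10x10 planar world, circular obstacle, step size, and iteration budget are illustrative assumptions, not material from the tutorial itself:

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=5000, seed=0):
    """Grow a tree from `start` toward `goal` in a 10x10 planar
    configuration space (illustrative assumption)."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        sample = (rng.uniform(0, 10), rng.uniform(0, 10))
        # Find the tree node nearest to the sampled configuration.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer one step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue  # discard configurations in collision
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            # Walk parent pointers back to the root to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None  # no path found within the iteration budget

# Free space: everything outside a circular obstacle centered at (5, 5).
free = lambda q: math.dist(q, (5.0, 5.0)) > 1.5
path = rrt((1.0, 1.0), (9.0, 9.0), free)
```

    A practical planner would also smooth the returned path and check collisions along the edge between nodes, not just at the new node; the sketch omits both for brevity.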
  • Tutorial: Advanced 3D Point Cloud Processing with Point Cloud Library (PCL)
    Point clouds are one of the most fascinating and challenging sensor streams, and they have led to countless publications. The advent of low-cost 3D cameras, such as the Microsoft Kinect, has sparked a wide range of new ideas and projects in this field. The PCL community aims to bring all of these activities together in a single open-source library. Backed by leading institutions and researchers around the world, as well as dedicated senior-level programmers, the project has the opportunity to tie together the loose ends in point cloud processing. The Point Cloud Library lets every researcher quickly try out new ideas, discuss them with a large community, and get support from it. Most of this happens through electronic communication, including mailing lists and chat systems; to share the work with the broader robotics community, and to get more people involved, we propose a one-day tutorial. We will introduce the library, guide attendees through their first steps with it, and showcase results that have already been achieved. PCL is a truly open community with a flat administrative structure. Our documentation is specifically designed to guide new users, and we have created help channels that give them the opportunity to rapidly become contributors.
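    A staple operation in point cloud processing, and one provided by PCL's VoxelGrid filter, is voxel-grid downsampling: all points falling in the same cubic cell are replaced by their centroid. As a language-neutral illustration of the idea (not using PCL itself; the leaf size and sample points below are made-up assumptions):

```python
import math
from collections import defaultdict

def voxel_downsample(points, leaf_size=0.1):
    """Replace every group of points sharing a cubic voxel by the
    group's centroid -- the idea behind a voxel-grid filter."""
    buckets = defaultdict(list)
    for x, y, z in points:
        # Integer voxel coordinates of the point.
        key = (math.floor(x / leaf_size),
               math.floor(y / leaf_size),
               math.floor(z / leaf_size))
        buckets[key].append((x, y, z))
    out = []
    for pts in buckets.values():
        n = len(pts)
        # Emit one centroid per occupied voxel.
        out.append((sum(p[0] for p in pts) / n,
                    sum(p[1] for p in pts) / n,
                    sum(p[2] for p in pts) / n))
    return out

# Four points: three fall in one voxel, the fourth in another,
# so the downsampled cloud has two points.
cloud = [(0.01, 0.02, 0.03), (0.04, 0.05, 0.06),
         (0.07, 0.08, 0.09), (0.51, 0.52, 0.53)]
reduced = voxel_downsample(cloud, leaf_size=0.1)
```

    In PCL the same effect is achieved in C++ with `pcl::VoxelGrid` and its leaf-size parameter; the sketch above only conveys the algorithmic idea.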
  • Tutorial: Reinforcement Learning for Robotics and Control
    This all-day tutorial introduces the audience to reinforcement learning. No prior experience in this area is assumed. The first half of the tutorial covers the foundations of reinforcement learning: Markov decision processes, value iteration, policy iteration, linear programming for solving an MDP, function approximation, model-free versus model-based learning, Q-learning, TD-learning, policy search, the likelihood-ratio policy gradient, the policy gradient theorem, actor-critic methods, the natural gradient, and importance sampling. The second half discusses example success stories and open problems.
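    Value iteration, the first solution method listed above, repeatedly applies the Bellman optimality backup V(s) ← max_a Σ_s' P(s'|s,a) [R(s,a,s') + γ V(s')] until the value function stops changing. A minimal sketch on a tiny two-state MDP; the transition model, rewards, and discount factor are made-up assumptions for illustration:

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality backup
    V(s) <- max_a sum_s' P[s,a,s'] * (R[s,a,s'] + gamma * V(s'))
    to convergence; returns the optimal value function."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(P[(s, a, s2)] * (R[(s, a, s2)] + gamma * V[s2])
                    for s2 in states)
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:  # stop once no state's value changes noticeably
            return V

# Tiny 2-state MDP (illustrative): "go" moves between the states,
# "stay" keeps you put, and landing in state 1 pays reward 1.
states, actions = [0, 1], ["stay", "go"]
P = {(0, "stay", 0): 1.0, (0, "stay", 1): 0.0,
     (0, "go", 0): 0.0,   (0, "go", 1): 1.0,
     (1, "stay", 0): 0.0, (1, "stay", 1): 1.0,
     (1, "go", 0): 1.0,   (1, "go", 1): 0.0}
R = {k: (1.0 if k[2] == 1 else 0.0) for k in P}
V = value_iteration(states, actions, P, R)
```

    For this MDP the optimal policy stays in state 1 forever, so both states have value 1/(1 - γ) = 10; the sketch uses in-place (Gauss–Seidel) updates, which also converge.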