TechTalks from event: 2nd Workshop on Semantic Perception, Mapping and Exploration (SPME)

The conference registration code needed to access these videos can be obtained via this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv.

  • Advanced 3D Point Cloud Processing with Point Cloud Library (PCL) Authors: M. Dixon, A. Ichim, Z. Marton, R. Rusu, J. Sprickerhof and A. Trevor
    Point clouds are one of the most fascinating and challenging sensor streams, leading to countless publications. The advent of low-cost 3D cameras, such as the Microsoft Kinect, has led to a wide range of new ideas and projects in this field. The PCL community tries to bring all these activities together in one open-source library. Backed by leading institutions and researchers around the world, as well as dedicated senior-level programmers, this gives us the opportunity to tie up the loose ends in point cloud processing. The Point Cloud Library gives every researcher the opportunity to try new ideas quickly, discuss them with a large community, and get that community's support. Most of this happens through electronic communication, including mailing lists and chat systems, but to share it with the broader robotics community, and to get more people involved, we propose a one-day tutorial. We will give an introduction to the library, guide attendees through their first steps using it, and show the results that have already been achieved with it. PCL is a truly open community with a low administrative structure. We have designed our documentation specifically to guide new users and have created help channels that give them the opportunity to rapidly become contributors. (A minimal usage sketch of the library appears after this list.)
  • Techniques for Object Recognition from Range Data Authors: Wolfram Burgard
    In this talk we address the problem of object recognition in 3D point cloud data. We first present a novel interest point extraction method that operates on range images generated from arbitrary 3D point clouds. Our approach explicitly considers the borders of objects, identified by transitions from foreground to background, and we introduce a corresponding feature descriptor. We present rigorous experiments in which we analyze the usefulness of our method for object detection. We furthermore describe a novel algorithm for constructing a compact representation of 3D point clouds. Our approach extracts an alphabet of local scans from the scene; the words of this alphabet are then used to replace recurrent local 3D structures, which leads to a substantial compression of the entire point cloud. We optimize our model in terms of complexity and accuracy by minimizing the Bayesian information criterion (BIC). Experimental evaluations on large real-world data show that our method allows us to accurately reconstruct environments with as few as 70 words. We finally discuss how this method can be utilized for object recognition and loop closure detection in SLAM (Simultaneous Localization and Mapping). (A sketch of a border-aware range-image keypoint pipeline of this kind appears after this list.)
  • Real-time 3D Stereo Mapping in Complex Dynamic Environments Authors: Max Bajracharya, Jeremy Ma, Andrew Howard and Larry Matthies
  • Features for RGB-D Object Recognition Authors: Dieter Fox
    Features are important components of object recognition systems. The combination of color and depth information provided by RGB-D cameras creates opportunities for developing improved features. In this talk, I will discuss our recent work on learning features for object recognition. Kernel descriptors provide a flexible framework for incorporating manually designed point features. Hierarchical matching pursuit uses sparse coding to learn features from raw, unlabeled RGB-D data. Both approaches achieve high accuracy on RGB-D object recognition tasks. (A generic sparse-coding sketch appears after this list.)
  • Object Categorization in the Sink: Learning Behavior-Grounded Object Categories with Water Authors: Shane Griffith, Vladimir Sukhoy, Todd Wegter and Alexander Stoytchev
  • Object Persistence in 3D for Home Robots Authors: Parnian Alimi, David Meger and James Little
  • Introduction Authors: Workshop organizers
  • "Robot Bring Me Something to Drink From": Object Representation forTransferring Task Specific Grasps Authors: Marianna Madry, Dan Song, Carl Henrik Ek and Danica Kragic
  • Exploiting Semantics in Mobile Robotics Authors: Andrzej Pronobis, Alper Aydemir, Kristoffer Sjöö and Patric Jensfelt
    Robots have finally escaped from industrial workplaces and made their way into our homes, offices and public spaces. In order to realize the dream of robot assistants performing human tasks alongside humans in a seamless fashion, we need to provide them with the fundamental capability of understanding complex and unstructured environments. In this talk, we will give an overview of our recent work on semantic spatial understanding and on exploiting semantic knowledge to generate more informed and efficient robot behavior in human environments. We will start by presenting our spatial knowledge modeling framework and continue with methods for acquiring semantic world descriptions and for abstracting and reasoning about object locations, topology and segments of space. Finally, we will show that semantic knowledge can indeed improve the performance of a robot on the task of large-scale object search and make the robot's behavior much more intuitive and human-like.
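
The sketch referenced in the PCL tutorial abstract above is a minimal example of the kind of basic pipeline the library supports: loading a cloud from disk, downsampling it with a voxel grid, and estimating surface normals. The filename and all numeric parameters are illustrative assumptions, not values from the talk.

```cpp
// Minimal PCL pipeline sketch: load a cloud, downsample it, estimate normals.
// "input.pcd" and the numeric parameters are placeholders.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/features/normal_3d.h>
#include <pcl/search/kdtree.h>

int main ()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
  if (pcl::io::loadPCDFile<pcl::PointXYZ> ("input.pcd", *cloud) < 0)
    return -1;  // could not read the file

  // Downsample with a voxel grid so later stages work on fewer points.
  pcl::PointCloud<pcl::PointXYZ>::Ptr downsampled (new pcl::PointCloud<pcl::PointXYZ>);
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud (cloud);
  voxel.setLeafSize (0.01f, 0.01f, 0.01f);  // 1 cm voxels (illustrative)
  voxel.filter (*downsampled);

  // Estimate per-point surface normals from local neighborhoods.
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud (downsampled);
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);
  ne.setSearchMethod (tree);
  ne.setRadiusSearch (0.03);  // 3 cm neighborhood (illustrative)
  pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal>);
  ne.compute (*normals);

  return 0;
}
```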
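
For the border-aware range-image interest points discussed in Wolfram Burgard's talk, PCL ships a keypoint detector (NARF) built on the same idea of treating foreground-to-background transitions explicitly. The sketch below is assembled from PCL's public API with standard tutorial-style parameters; it is not the speaker's own code, and "scene.pcd", the angular resolution, and the support size are assumed values.

```cpp
// Sketch: border-aware interest points on a range image with PCL's NARF detector.
// "scene.pcd" and the numeric parameters are placeholders.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/common/angles.h>
#include <pcl/range_image/range_image.h>
#include <pcl/features/range_image_border_extractor.h>
#include <pcl/keypoints/narf_keypoint.h>

int main ()
{
  pcl::PointCloud<pcl::PointXYZ> cloud;
  if (pcl::io::loadPCDFile<pcl::PointXYZ> ("scene.pcd", cloud) < 0)
    return -1;  // could not read the file

  // Render the point cloud into a range image as seen from the sensor origin.
  Eigen::Affine3f sensor_pose = Eigen::Affine3f::Identity ();
  pcl::RangeImage range_image;
  range_image.createFromPointCloud (cloud, pcl::deg2rad (0.5f),
                                    pcl::deg2rad (360.0f), pcl::deg2rad (180.0f),
                                    sensor_pose, pcl::RangeImage::CAMERA_FRAME,
                                    0.0f /*noise level*/, 0.0f /*min range*/, 1 /*border size*/);

  // Detect keypoints that explicitly use object borders
  // (foreground-to-background transitions) in the range image.
  pcl::RangeImageBorderExtractor border_extractor;
  pcl::NarfKeypoint detector (&border_extractor);
  detector.setRangeImage (&range_image);
  detector.getParameters ().support_size = 0.2f;  // keypoint support in meters (illustrative)

  pcl::PointCloud<int> keypoint_indices;
  detector.compute (keypoint_indices);

  return 0;
}
```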
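
Dieter Fox's abstract mentions sparse coding as the core of hierarchical matching pursuit. As a generic illustration only, and not the hierarchical, learned-dictionary method from the talk, the sketch below encodes a signal against a fixed dictionary with plain orthogonal matching pursuit using Eigen; the dictionary and signal are random toy data.

```cpp
// Generic orthogonal matching pursuit (OMP): greedily pick dictionary atoms,
// then re-fit the selected atoms by least squares. Toy data only.
#include <Eigen/Dense>
#include <iostream>
#include <vector>

// Approximate x with at most k columns of D (D is assumed to have unit-norm columns).
Eigen::VectorXd omp (const Eigen::MatrixXd& D, const Eigen::VectorXd& x, int k)
{
  Eigen::VectorXd residual = x;
  Eigen::VectorXd code = Eigen::VectorXd::Zero (D.cols ());
  std::vector<int> support;

  for (int iter = 0; iter < k; ++iter)
  {
    // Pick the atom most correlated with the current residual.
    Eigen::VectorXd corr = (D.transpose () * residual).cwiseAbs ();
    Eigen::Index best;
    corr.maxCoeff (&best);
    support.push_back (static_cast<int> (best));

    // Jointly re-fit all selected atoms by least squares.
    Eigen::MatrixXd Dsub (D.rows (), support.size ());
    for (std::size_t j = 0; j < support.size (); ++j)
      Dsub.col (j) = D.col (support[j]);
    Eigen::VectorXd coeffs = Dsub.colPivHouseholderQr ().solve (x);
    residual = x - Dsub * coeffs;

    // Scatter the dense least-squares coefficients back into the sparse code.
    code.setZero ();
    for (std::size_t j = 0; j < support.size (); ++j)
      code (support[j]) = coeffs (j);
  }
  return code;
}

int main ()
{
  // 8-dimensional signals, 16 random unit-norm atoms, 3-sparse code.
  Eigen::MatrixXd D = Eigen::MatrixXd::Random (8, 16);
  D.colwise ().normalize ();
  Eigen::VectorXd x = Eigen::VectorXd::Random (8);
  std::cout << "sparse code: " << omp (D, x, 3).transpose () << std::endl;
  return 0;
}
```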