TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos is available through this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv

Visual Tracking

  • Generic Realtime Kernel Based Tracking Authors: Hadj-Abdelkader, Hicham; Mezouar, Youcef; Chateau, Thierry
    This paper deals with the design of a generic visual tracking algorithm suitable for a large class of cameras (single-viewpoint sensors). It is based on estimating the relationship between observations and motion on the sphere, which is achieved efficiently using a kernel-based regression function on a generic linearly-weighted sum of non-linear basis functions. We also present two sets of experiments. The first shows the efficiency of our algorithm through tracking in video sequences acquired with three types of camera (conventional, dioptric fisheye and catadioptric); real-time performance is demonstrated by tracking one or several planes. The second set of experiments presents an application of our tracking algorithm to visual servoing with a fisheye camera. (A generic sketch of this style of kernel regression appears after this list.)
  • Generative Object Detection and Tracking in 3D Range Data Authors: Kaestner, Ralf; Maye, Jerome; Pilat, Yves; Siegwart, Roland
    This paper presents a novel approach to tracking dynamic objects in 3D range data. Its key contribution lies in the generative object detection algorithm, which allows the tracker to robustly extract objects of varying sizes and shapes from the observations. In contrast to tracking methods using discriminative detectors, we are thus able to generalize over a wide range of object classes matching our assumptions. Whilst the generative model underlying our framework inherently scales with the complexity and the noise characteristics of the environment, all parameters involved in the detection process obey a clean probabilistic interpretation. Nevertheless, our unsupervised object detection and tracking algorithm achieves real-time performance, even in highly dynamic scenarios containing a significant number of moving objects. Through an application to populated urban settings, we show that the tracking performance of the presented approach is comparable to state-of-the-art discriminative methods. (A generic class-agnostic clustering stand-in is sketched after this list.)
  • Moving Vehicle Detection and Tracking in Unstructured Environments Authors: Wojke, Nicolai; Häselich, Marcel
    The detection and tracking of moving vehicles is a necessity for collision-free navigation. In natural unstructured environments, motion-based detection is challenging due to a low signal-to-noise ratio. This paper describes our approach for an autonomous outdoor robot that travels at 14 km/h and is equipped with a Velodyne HDL-64E S2 for environment perception. We extend existing work that has proven reliable in urban environments. To overcome the unavailability of road-network information for background separation, we introduce a foreground model that incorporates geometric as well as temporal cues. Local shape estimates successfully guide vehicle localization. Extensive evaluation shows that the system works reliably and efficiently in various outdoor scenarios without any prior knowledge of the road network. Experiments with our own sensor, as well as on publicly available data from the DARPA Urban Challenge, show more than 96% of vehicles correctly identified. (A toy cue-fusion sketch appears after this list.)
  • Learning to Place New Objects Authors: Jiang, Yun; Zheng, Changxi; Lim, Marcus; Saxena, Ashutosh
    The ability to place objects in an environment is an important skill for a personal robot. An object should not only be placed stably, but should also be placed in its preferred location and orientation. For instance, it is preferable that a plate be inserted vertically into the slot of a dish-rack rather than placed horizontally in it. Unstructured environments such as homes have a large variety of object types as well as of placing areas, so our algorithms should be able to handle placing new object types in new placing areas. These requirements make placing a challenging manipulation task. In this work, we propose a supervised learning approach for finding good placements given point-clouds of the object and the placing area. Our method combines features that capture support, stability and preferred configurations, and uses a shared sparsity structure in its parameters. Even when neither the object nor the placing area appears in the training set, our learning algorithm predicts good placements. In robotic experiments, our method enables the robot to stably place known objects with a 98% success rate, and with 98% when also considering semantically preferred orientations; for a new object in a new placing area, the success rates are 82% and 72%, respectively. (A sparse-scoring sketch appears after this list.)
  • Lost in Translation (and Rotation): Rapid Extrinsic Calibration for 2D and 3D LIDARs Authors: Maddern, William; Harrison, Alastair; Newman, Paul
    This paper describes a novel method for determining the extrinsic calibration parameters between 2D and 3D LIDAR sensors with respect to a vehicle base frame. To recover the calibration parameters we optimize the quality of a 3D point cloud produced by the vehicle as it traverses an unknown, unmodified environment. The point cloud quality metric is derived from Rényi Quadratic Entropy and quantifies the compactness of the point distribution using only a single tuning parameter. We also present a fast approximate method to reduce the computational requirements of the entropy evaluation, allowing unsupervised calibration in vast environments with millions of points. The algorithm is analyzed using real-world data gathered in many locations, showing robust calibration performance and substantial speed improvements from the approximations. (A direct implementation of the entropy measure appears after this list.)
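
For a flavour of the kernel-based tracking in "Generic Realtime Kernel Based Tracking", here is a minimal sketch of regressing motion parameters from an observation vector as a linearly-weighted sum of non-linear basis functions. It assumes Gaussian RBF bases and ridge-regularised weights; the authors' spherical formulation and choice of bases are not reproduced, and every name below is hypothetical.

```python
import numpy as np

def rbf_features(X, centers, gamma):
    """Non-linear basis functions: Gaussian RBFs at fixed centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

class KernelMotionRegressor:
    """Maps an observation vector (e.g. sampled intensities) to motion
    parameters via a linearly-weighted sum of non-linear bases."""

    def __init__(self, n_centers=50, gamma=0.1, reg=1e-3):
        self.n_centers, self.gamma, self.reg = n_centers, gamma, reg

    def fit(self, observations, motions):
        # pick basis centers among the training observations
        # (requires len(observations) >= n_centers)
        rng = np.random.default_rng(0)
        idx = rng.choice(len(observations), self.n_centers, replace=False)
        self.centers = observations[idx]
        Phi = rbf_features(observations, self.centers, self.gamma)
        # ridge-regularised least squares for the linear weights
        A = Phi.T @ Phi + self.reg * np.eye(self.n_centers)
        self.weights = np.linalg.solve(A, Phi.T @ motions)
        return self

    def predict(self, observations):
        return rbf_features(observations, self.centers, self.gamma) @ self.weights
```

In a tracker of this kind, fit would typically be run on synthetically perturbed template observations, and predict would then estimate the incremental motion at each new frame.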
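The abstract of "Generative Object Detection and Tracking in 3D Range Data" does not spell out its generative model, so the sketch below substitutes the simplest class-agnostic detector in the same spirit: Euclidean clustering of range points on a voxel hash. This is a generic illustration only, not the paper's algorithm.

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, radius=0.5, min_size=10):
    """Class-agnostic object extraction: group points whose neighbours
    lie within `radius`, via breadth-first search on a voxel hash."""
    # voxel size = radius, so any neighbour within `radius` sits in one
    # of the 27 adjacent voxels
    voxel = np.floor(points / radius).astype(int)
    grid = {}
    for i, v in enumerate(map(tuple, voxel)):
        grid.setdefault(v, []).append(i)
    seen = np.zeros(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if seen[seed]:
            continue
        seen[seed] = True
        queue, members = deque([seed]), []
        while queue:
            i = queue.popleft()
            members.append(i)
            vx, vy, vz = voxel[i]
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        for j in grid.get((vx + dx, vy + dy, vz + dz), []):
                            if not seen[j] and np.linalg.norm(points[j] - points[i]) <= radius:
                                seen[j] = True
                                queue.append(j)
        if len(members) >= min_size:
            clusters.append(np.array(members))
    return clusters
```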
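"Moving Vehicle Detection and Tracking in Unstructured Environments" fuses geometric and temporal cues in its foreground model. The abstract does not give the exact form, so the following is a deliberately naive stand-in on a 2D grid: both cue shapes and the fusion rule are assumptions, not the paper's model.

```python
import numpy as np

def foreground_probability(height, occ_now, occ_prev, h_scale=0.5, tau=0.3):
    """Toy fusion of a geometric cue (height above local ground) and a
    temporal cue (change in per-cell occupancy between scans).
    All inputs are arrays over grid cells."""
    # geometric cue: taller cells are more likely foreground
    geometric = 1.0 - np.exp(-np.maximum(height, 0.0) / h_scale)
    # temporal cue: cells whose occupancy changed markedly
    temporal = np.abs(occ_now - occ_prev) > tau
    # naive product-of-cues fusion (assumed, not the paper's rule)
    return geometric * (0.5 + 0.5 * temporal)
```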
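"Learning to Place New Objects" learns a sparse linear scoring of placement features. As a stand-in for the paper's shared-sparsity formulation, the sketch below trains a single L1-regularised logistic scorer by proximal gradient descent (ISTA) and ranks candidate placements by score; the coupling across tasks that "shared sparsity" implies is omitted, and the helper names are hypothetical.

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of the L1 norm."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def train_sparse_scorer(X, y, lam=0.01, lr=0.1, iters=500):
    """L1-regularised logistic regression via ISTA.
    X: (n_samples, n_features), y: labels in {0, 1}."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))        # predicted probabilities
        grad = X.T @ (p - y) / len(y)             # gradient of logistic loss
        w = soft_threshold(w - lr * grad, lr * lam)
    return w

def best_placement(candidate_features, w):
    """Return the index of the highest-scoring candidate placement."""
    return int(np.argmax(candidate_features @ w))
```

Each row of candidate_features would hold the support/stability/preference features of one candidate placement extracted from the point-clouds.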
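The quality metric in "Lost in Translation (and Rotation)" is derived from Rényi Quadratic Entropy, which under a Gaussian kernel has the standard closed form implemented below; sigma plays the role of the single tuning parameter the abstract mentions. This is the direct O(N²) version, not the paper's fast approximation.

```python
import numpy as np

def renyi_quadratic_entropy(points, sigma=0.1):
    """Rényi Quadratic Entropy of a point set under a Gaussian kernel:
    H = -log( (1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2 I) ).
    Lower entropy means a more compact (crisper) point cloud.
    Direct O(N^2) evaluation; memory is O(N^2 * d)."""
    n, d = points.shape
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    # Gaussian kernel of covariance 2*sigma^2 I (kernel self-convolution)
    norm = (4.0 * np.pi * sigma ** 2) ** (d / 2)
    info_potential = np.exp(-d2 / (4.0 * sigma ** 2)).sum() / (n * n * norm)
    return -np.log(info_potential)
```

Calibration then amounts to searching over the extrinsic parameters for the transform whose assembled point cloud minimises this entropy.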