TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos is available via this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv

Localization II

  • Road Vehicle Localization with 2D Push-Broom Lidar and 3D Priors Authors: Baldwin, Ian Alan; Newman, Paul
    In this paper we describe and demonstrate a method for precisely localizing a road vehicle using a single push-broom 2D laser scanner while leveraging a prior 3D survey. In contrast to conventional scan matching, our laser is oriented downwards, thus causing continual ground strike. Our method exploits this to produce a small 3D swathe of laser data which can be matched statistically within the 3D survey. This swathe generation is predicated upon time-varying estimates of vehicle velocity. While in theory this data could be obtained from vehicle speedometers, in reality these instruments are biased, so we also provide a way to estimate this bias from survey data. We show that our low-cost system consistently outperforms a high-calibre integrated DGPS/IMU system over 26 km of driven path around a test site. (See the swathe-generation sketch after this list.)
  • Radar-Only Localization and Mapping for Ground Vehicle at High Speed and for Riverside Boat Authors: Vivet, Damien; Checchin, Paul; Chapuis, Roland
    The use of a rotating range sensor in high-speed robotics creates distortions in the collected data. In the majority of studies, this effect is ignored or treated as noise and then corrected, based on proprioceptive sensors or localization systems. In this study, we consider that the distortion contains information about the vehicle's displacement. We propose to extract this information from the distortion without any information other than the exteroceptive sensor data. The only sensor used in this work is a panoramic Frequency Modulated Continuous Wave (FMCW) radar called K2Pi. No odometer, gyrometer or other proprioceptive sensor is used. The idea is to perform velocimetry by analyzing the distortion of the measurements. As a result, the linear and angular velocities of the mobile robot are estimated and used to build, without any other sensor, the trajectory of the vehicle and then the radar map of outdoor environments. In this paper, radar-only localization and mapping results are presented for a ground vehicle and a riverbank application. This work can easily be extended to other slow rotating range sensors. (See the scan-distortion sketch after this list.)
  • LAPS - Localisation using Appearance of Prior Structure: 6-DoF Monocular Camera Localisation using Prior Pointclouds Authors: Stewart, Alex; Newman, Paul
    This paper is about pose estimation using monocular cameras with a 3D laser pointcloud as a workspace prior. We have in mind autonomous transport systems in which low-cost vehicles equipped with monocular cameras are furnished with preprocessed 3D lidar workspace surveys. Our inherently cross-modal approach offers robustness to changes in scene lighting and is computationally cheap. At the heart of our approach lies inference of camera motion by minimisation of the Normalised Information Distance (NID) between the appearance of 3D lidar data reprojected into overlapping images. Results are presented which demonstrate the applicability of this approach to the localisation of a camera against a lidar pointcloud using data gathered from a road vehicle. (See the NID sketch after this list.)
  • An Outdoor High-Accuracy Local Positioning System for an Autonomous Robotic Golf Greens Mower Authors: Smith, Aaron; Chang, H. Jacky; Blanchard, Edward
    This paper presents a high-accuracy local positioning system (LPS) for an autonomous robotic greens mower. The LPS uses a sensor tower mounted on top of the robot and four active beacons surrounding a target area. The proposed LPS concurrently determines robot location using a lateration technique and calculates orientation using angle measurements. To perform localization, the sensor tower emits an ultrasonic pulse that is received by the beacons. The time of arrival is measured by each beacon and transmitted back to the sensor tower. To determine the robot’s orientation, the sensor tower has a circular receiver array that detects infrared signals emitted by each beacon. Using the direction and strength of the received infrared signals, the relative angles to each beacon are obtained and the robot orientation can be determined. Experimental data show that the LPS achieves a position accuracy of 3.1 cm RMS and an orientation accuracy of 0.23° RMS. Several prototype robotic mowers utilizing the proposed LPS have been deployed for field testing, and the mowing results are comparable to those of an experienced professional human worker. (See the lateration sketch after this list.)
  • Curb-Intersection Feature Based Monte Carlo Localization on Urban Roads Authors: Qin, Baoxing; Chong, Zhuang Jie; Bandyopadhyay, Tirthankar; Ang Jr, Marcelo H; Frazzoli, Emilio; Rus, Daniela
    One of the most prominent features on an urban road is the curb, which defines the boundary of the road surface. An intersection is a junction of two or more roads, appearing where no curb exists. The combination of curb and intersection features and their idiosyncrasies carries significant information about the urban road network that can be exploited to improve a vehicle's localization. This paper introduces a Monte Carlo Localization (MCL) method using curb-intersection features on urban roads. We propose a novel idea of a "Virtual LIDAR" to obtain measurement models for these features. Under the MCL framework, these road observations are fused with odometry information, yielding precise localization. We implement the system using a single tilted 2D LIDAR on our autonomous test bed and show robust performance in the presence of occlusion from other vehicles and pedestrians. (See the particle-filter sketch after this list.)
  • Satellite Image Based Precise Robot Localization on Sidewalks Authors: Senlet, Turgay; Elgammal, Ahmed
    In this paper, we present a novel computer vision framework for precise localization of a mobile robot on sidewalks. In our framework, we combine stereo camera images, visual odometry, satellite map matching, and a sidewalk probability transfer function obtained from street maps in order to attain globally corrected localization results. The framework is capable of precisely localizing a mobile robot platform that navigates on sidewalks, without the use of traditional wheel odometry, GPS or INS inputs. On a complex 570-meter sidewalk route, we show that we obtain superior localization results compared to visual odometry and GPS. (See the map-matching sketch after this list.)
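
For the push-broom lidar talk above, here is a minimal sketch of the swathe-generation idea: each 2D scan is dead-reckoned into a common 3D frame using the (bias-corrected) velocity estimates. The interfaces and the planar motion model are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the paper's code) of building a 3D swathe from a
# downward-facing 2D scanner by integrating time-varying velocity estimates.
import numpy as np

def build_swathe(scans, stamps, velocities, yaw_rates):
    """Place each 2D scan into a common 3D frame by dead-reckoning the
    vehicle pose between scan times.

    scans      -- list of (N_i, 2) arrays: points (lateral, height) in the scan plane
    stamps     -- (M,) scan timestamps in seconds
    velocities -- (M,) forward speed estimates (bias-corrected), m/s
    yaw_rates  -- (M,) yaw-rate estimates, rad/s
    """
    x, y, yaw = 0.0, 0.0, 0.0            # planar vehicle pose (assumed model)
    swathe = []
    for i, scan in enumerate(scans):
        if i > 0:
            dt = stamps[i] - stamps[i - 1]
            yaw += yaw_rates[i] * dt
            x += velocities[i] * dt * np.cos(yaw)
            y += velocities[i] * dt * np.sin(yaw)
        c, s = np.cos(yaw), np.sin(yaw)
        for ly, z in scan:               # scan plane is cross-track, so a
            swathe.append((x - s * ly,   # point only has lateral offset ly
                           y + c * ly,   # and height z in the vehicle frame
                           z))
    return np.asarray(swathe)
```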
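
For the radar-only talk, a hedged sketch of the forward model that makes distortion informative: while the sensor sweeps one revolution, the vehicle keeps moving, so each beam is fired from a different pose. Distortion-based velocimetry inverts this model by searching for the (v, omega) that best explains consecutive scans; the simulator below (landmark set, beam count, and constant-velocity unicycle motion are illustrative assumptions) shows only the forward direction.

```python
# A toy forward model: how constant (v, omega) distorts one rotating scan.
import numpy as np

def distorted_scan(landmarks, v, omega, T=1.0, n_beams=360):
    """Ranges to point landmarks seen by a sensor sweeping 2*pi in T seconds
    while the vehicle moves at constant speed v and yaw rate omega.
    landmarks -- (K, 2) world points; returns (n_beams,) ranges, inf = miss."""
    ranges = np.full(n_beams, np.inf)
    for b in range(n_beams):
        t = b * T / n_beams                        # firing time of beam b
        if abs(omega) < 1e-9:                      # exact unicycle pose at t
            pos, yaw = np.array([v * t, 0.0]), 0.0
        else:
            pos = (v / omega) * np.array([np.sin(omega * t),
                                          1.0 - np.cos(omega * t)])
            yaw = omega * t
        beam = yaw + 2.0 * np.pi * b / n_beams     # world-frame beam azimuth
        d = landmarks - pos
        az = np.arctan2(d[:, 1], d[:, 0])
        err = np.abs((az - beam + np.pi) % (2.0 * np.pi) - np.pi)
        hit = err < np.pi / n_beams                # within half a beam width
        if hit.any():
            ranges[b] = np.linalg.norm(d[hit], axis=1).min()
    return ranges
```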
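
For LAPS, a minimal sketch of the Normalised Information Distance that the paper minimises, computed here from a joint intensity histogram. The bin count and the equal-length flat-sample interface are assumptions, and the optimisation over camera pose is left out.

```python
# NID between two appearance samples, from their joint histogram.
import numpy as np

def nid(a, b, bins=32):
    """NID = (H(A,B) - I(A;B)) / H(A,B), in [0, 1]; smaller means the two
    samples explain each other better. a, b are flat intensity arrays of
    equal length (e.g. lidar reflectance values reprojected into an image
    versus the pixels they land on)."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)      # marginals

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    hxy = entropy(pxy.ravel())
    mi = entropy(px) + entropy(py) - hxy           # mutual information
    return (hxy - mi) / hxy
```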
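
For the greens-mower LPS, a hedged sketch of the lateration step: times of arrival become ranges, and subtracting one range equation from the rest removes the quadratic term, leaving a linear least-squares problem. The beacon geometry and one-way-ToA interface are illustrative assumptions.

```python
# 2D position from time-of-arrival ranges to known beacons.
import numpy as np

def laterate(beacons, toas, c=343.0):
    """beacons -- (N, 2) known beacon positions, N >= 3
    toas    -- (N,) one-way ultrasonic times of arrival, seconds
    c       -- speed of sound, m/s
    Expanding |p - b_i|^2 = r_i^2 and subtracting the i = 0 equation
    yields the linear system  2 (b_i - b_0) . p = r_0^2 - r_i^2 + |b_i|^2 - |b_0|^2."""
    r = c * np.asarray(toas)
    b0, r0 = beacons[0], r[0]
    A = 2.0 * (beacons[1:] - b0)
    rhs = (r0**2 - r[1:]**2
           + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    p, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return p
```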
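
For the curb-intersection talk, a skeleton of one Monte Carlo Localization cycle. The `motion_model` and `measurement_likelihood` callables stand in for the paper's odometry model and "Virtual LIDAR" curb/intersection measurement models; they are assumed interfaces, not the authors' code.

```python
# One predict-update-resample cycle of a particle filter.
import numpy as np

def mcl_step(particles, weights, odom, scan,
             motion_model, measurement_likelihood):
    """particles -- (N, 3) poses (x, y, yaw); weights -- (N,) normalized."""
    # Predict: diffuse every particle through the (noisy) motion model.
    particles = np.array([motion_model(p, odom) for p in particles])
    # Update: reweight by how well the expected curb/intersection
    # observation at each pose explains the actual measurement.
    weights = weights * np.array([measurement_likelihood(scan, p)
                                  for p in particles])
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < 0.5 * len(particles):
        idx = np.random.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```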
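
For the sidewalk-localization talk, a rough sketch of the satellite map-matching ingredient: correlate a small top-down patch (e.g. rendered from stereo) against a satellite-image crop and use the best offset as a global correction to drifting visual odometry. This brute-force normalised cross-correlation is an illustrative stand-in, not the paper's matching stage.

```python
# Exhaustive normalised cross-correlation of a patch over a satellite crop.
import numpy as np

def ncc_match(patch, sat):
    """Return the (row, col) offset in `sat` where `patch` correlates best.
    Both inputs are 2D grayscale arrays with sat larger than patch."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-9)  # zero-mean, unit-var
    best, best_rc = -np.inf, (0, 0)
    for r in range(sat.shape[0] - ph + 1):
        for c in range(sat.shape[1] - pw + 1):
            w = sat[r:r + ph, c:c + pw]
            score = np.mean(p * (w - w.mean()) / (w.std() + 1e-9))
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc
```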