TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos is available through PaperPlaza. A step-by-step guide to accessing the videos is here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv

Calibration and Identification

  • Automatic Camera and Range Sensor Calibration using a Single Shot Authors: Geiger, Andreas; Moosmann, Frank; Car, Ömer; Schuster, Bernhard
    As a core problem in robotics and vision, camera and range sensor calibration has been researched intensely over the last decades. However, robotics research efforts still often get heavily delayed by the requirement of setting up a calibrated system consisting of multiple cameras and range measurement units. To remove this burden, we present a toolbox with a web interface for fully automatic camera-to-camera and camera-to-range calibration. Our system is easy to set up and recovers intrinsic and extrinsic camera parameters as well as the transformation between cameras and range sensors within one minute. In contrast to existing calibration approaches, which often require user intervention, the proposed method is robust to varying imaging conditions, fully automatic, and easy to use, since a single image and range scan prove sufficient for most calibration scenarios. Experimentally, we demonstrate that the proposed checkerboard corner detector significantly outperforms the current state of the art. Furthermore, the proposed camera-to-range registration method is able to discover multiple solutions in the case of ambiguities. Experiments using a variety of sensors, such as grayscale and color cameras, the Kinect 3D sensor and the Velodyne HDL-64 laser scanner, show the robustness of our method in different indoor and outdoor settings and under various lighting conditions.
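    Below is a minimal baseline for the checkerboard-detection step using OpenCV's stock detector, not the authors' corner detector (which the paper reports outperforms it). The pattern size, square size, image file name, and intrinsic matrix K are all illustrative assumptions.

```python
# Baseline checkerboard detection and single-shot board-pose estimation with
# OpenCV. This is NOT the paper's detector or toolbox; it sketches the same
# task under assumed inputs.
import cv2
import numpy as np

PATTERN = (9, 6)          # inner corners per row/column (assumption)
SQUARE_M = 0.04           # square size in metres (assumption)

img = cv2.imread("shot.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image file
found, corners = cv2.findChessboardCorners(img, PATTERN)
assert found, "checkerboard not detected"

# Refine the detected corners to sub-pixel accuracy.
corners = cv2.cornerSubPix(
    img, corners, (11, 11), (-1, -1),
    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))

# Board-frame 3D corner coordinates (Z = 0 on the board plane).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_M

# Extrinsic board pose from the single shot, given assumed intrinsics K.
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
ok, rvec, tvec = cv2.solvePnP(objp, corners, K, None)
print("board pose: rvec =", rvec.ravel(), "tvec =", tvec.ravel())
```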
  • Scale-Only Visual Homing from an Omnidirectional Camera Authors: Pradalier, Cedric; Liu, Ming; Pomerleau, Francois; Siegwart, Roland
    Visual homing is the process by which a mobile robot moves to a Home position using only information extracted from visual data. The approach we present in this paper uses image keypoints (e.g. SIFT) extracted from omnidirectional images and matches the current set of keypoints with the set recorded at the Home location. In this paper, we first formulate three different visual homing problems for an uncalibrated omnidirectional camera within the Image-Based Visual Servoing (IBVS) framework; we then propose a novel simplified homing approach, inspired by IBVS, that is based only on the scale information of the SIFT features and whose computational cost is linear in the number of features. This paper reports on the application of our method to a commonly cited indoor database, where it outperforms other approaches. We also briefly present results on a real robot and touch on the integration into a topological navigation framework.
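    A hedged sketch of one plausible scale-only homing rule in the spirit of the abstract: each matched SIFT feature contributes its bearing, weighted by the log-ratio of its scale at Home versus now, which keeps the cost linear in the number of features. The exact control law and weighting below are my assumptions, not the authors'.

```python
# Scale-only homing direction from matched omnidirectional SIFT features.
# Features that look smaller than at Home (we are farther from them) pull
# the robot toward their bearing; larger ones push it away.
import numpy as np

def homing_direction(bearings, scale_home, scale_cur):
    """bearings: (N,) feature azimuths [rad] on the omnidirectional image;
    scale_home / scale_cur: (N,) SIFT scales of the matched features."""
    w = np.log(np.asarray(scale_home) / np.asarray(scale_cur))  # >0: move toward
    v = np.stack([np.cos(bearings), np.sin(bearings)], axis=1)  # unit bearings
    d = (w[:, None] * v).sum(axis=0)                            # O(N) in features
    n = np.linalg.norm(d)
    return d / n if n > 1e-9 else np.zeros(2)

# Toy example: a landmark dead ahead that looks smaller than at Home,
# so the homing direction points forward.
print(homing_direction(np.array([0.0]), scale_home=[2.0], scale_cur=[1.0]))
```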
  • 3D Monocular Robotic Ball Catching with an Iterative Trajectory Estimation Refinement Authors: Lippiello, Vincenzo; Ruggiero, Fabio
    In this paper, a 3D robotic ball-catching algorithm that employs only an eye-in-hand monocular vision system is presented. A partitioned visual servoing control is used to generate the robot motion while always keeping the ball in the field of view of the camera. When the ball is detected, the camera mounted on the robot end-effector is commanded to follow a suitable baseline in order to acquire measurements and provide a first candidate interception point through a linear estimation process. Thereafter, further visual measurements are acquired to continuously refine the previous prediction through a non-linear estimation process. Experimental results show the effectiveness of the proposed solution.
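    A minimal sketch of the linear estimation stage: fit the ballistic model p(t) = p0 + v0·t + ½g·t² to the 3D positions seen so far by linear least squares, and re-predict the interception point as measurements accumulate. Monocular depth recovery and the paper's non-linear refinement are out of scope here; all constants are illustrative.

```python
# Iterative refinement of a ballistic interception estimate by repeated
# linear least-squares fits over a growing measurement window.
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity [m/s^2]

def fit_ballistic(t, p):
    """Linear LSQ fit of p(t) = p0 + v0*t + 0.5*G*t^2. t: (N,), p: (N,3)."""
    A = np.stack([np.ones_like(t), t], axis=1)       # design matrix [1, t]
    b = p - 0.5 * G * (t**2)[:, None]                # remove known gravity term
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[0], x[1]                                # p0, v0

def intercept_at_height(p0, v0, z):
    """First positive time the predicted ball crosses height z, and the point."""
    roots = np.roots([0.5 * G[2], v0[2], p0[2] - z])
    t_hit = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return t_hit, p0 + v0 * t_hit + 0.5 * G * t_hit**2

# Toy run: the interception prediction tightens as measurements accumulate.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.4, 15)
traj = np.array([0, 0, 1.0]) + np.outer(t, [2.0, 0.5, 4.0]) + 0.5 * G * (t**2)[:, None]
for n in (5, 10, 15):
    p0, v0 = fit_ballistic(t[:n], traj[:n] + 0.005 * rng.standard_normal((n, 3)))
    print(n, "samples ->", intercept_at_height(p0, v0, 0.5)[1])
```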
  • Automatically Calibrating the Viewing Direction of Optic-Flow Sensors Authors: Briod, Adrien; Zufferey, Jean-Christophe; Floreano, Dario
    Because of their low weight, cost and energy consumption, optic-flow sensors attract growing interest in robotics for tasks such as self-motion estimation or depth measurement. Most applications require a large number of these sensors, which involves a fair amount of calibration work for each setup. In particular, the viewing direction of each sensor has to be measured for proper operation. This task is often cumbersome and prone to errors, and has to be carried out every time the setup is slightly modified. This paper proposes an algorithm for viewing-direction calibration that relies on rate gyroscope readings and a recursive weighted linear least-squares estimation of the rotation matrix elements. The method only requires the user to perform random rotational motions of the setup by hand. The algorithm provides hints about the current precision of the estimation and about which motions should be performed to improve it. To assess the validity of the method, tests were performed on an experimental setup and the results compared to a precise manual calibration. The repeatability of the gyroscope-based calibration process reached ±1.7° per axis.
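    The estimator named in the abstract is a recursive weighted linear least-squares fit of a linear map from gyro rates to each sensor's rotational optic flow. The sketch below implements a generic recursive weighted least-squares updater for a model y = Hx; the 2x3 "true" map used to generate the demo data is a stand-in, not the paper's measurement model.

```python
# Generic recursive weighted linear least squares (covariance form), applied
# to recovering a linear flow<-gyro map from simulated hand rotations.
import numpy as np

class RecursiveWLS:
    def __init__(self, n):
        self.x = np.zeros(n)              # parameter estimate
        self.P = 1e6 * np.eye(n)          # estimate covariance (uninformative)

    def update(self, H, y, w=1.0):
        """One measurement y = H @ x + noise, with scalar weight w."""
        H = np.atleast_2d(H); y = np.atleast_1d(y)
        S = H @ self.P @ H.T + np.eye(len(y)) / w
        K = self.P @ H.T @ np.linalg.inv(S)           # gain
        self.x = self.x + K @ (y - H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
        return self.x

# Demo: recover a 2x3 map row by row from random rotational motions.
rng = np.random.default_rng(1)
M_true = rng.standard_normal((2, 3))      # stand-in "true" parameters
est = [RecursiveWLS(3) for _ in range(2)]
for _ in range(200):
    omega = rng.standard_normal(3)                      # gyro reading [rad/s]
    flow = M_true @ omega + 1e-3 * rng.standard_normal(2)
    for i in range(2):
        est[i].update(omega, flow[i])
print(np.max(np.abs(np.vstack([e.x for e in est]) - M_true)))  # small residual
```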
  • An Analytical Least-Squares Solution to the Odometer-Camera Extrinsic Calibration Problem Authors: Guo, Chao; Mirzaei, Faraz; Roumeliotis, Stergios
    In order to fuse camera and odometer measurements, we first need to estimate their relative transformation through the so-called odometer-camera extrinsic calibration. In this paper, we present a two-step analytical least-squares solution for the extrinsic odometer-camera calibration that (i) is not iterative and finds the least-squares optimal solution without any initialization, and (ii) does not require any special hardware or the presence of known landmarks in the scene. Specifically, in the first step, we estimate a subset of the 3D relative rotation parameters by analytically minimizing a least-squares cost function. We then back-substitute these estimates in the measurement constraints, and determine the rest of the 3D transformation parameters by analytically minimizing a second least-squares cost function. Simulation and experimental results are presented to validate the efficiency of the proposed algorithm.
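    The authors' analytical solution is specific to the odometer-camera geometry; as a hedged stand-in, the sketch below shows the same two-step structure on the classic AX = XB hand-eye formulation it resembles: rotation first by linear least squares over quaternions, then translation by back-substitution into a second linear least-squares problem. Synthetic data only; this is not the paper's algorithm.

```python
# Two-step linear least-squares hand-eye calibration (AX = XB):
# step 1 solves the rotation, step 2 back-substitutes it for the translation.
import numpy as np
from scipy.spatial.transform import Rotation

def left(q):   # q = [w,x,y,z]; left(q) @ p == quaternion product q*p
    w, x, y, z = q
    return np.array([[w,-x,-y,-z],[x,w,-z,y],[y,z,w,-x],[z,-y,x,w]])

def right(q):  # right(q) @ p == p*q
    w, x, y, z = q
    return np.array([[w,-x,-y,-z],[x,w,z,-y],[y,-z,w,x],[z,y,-x,w]])

def quat(R):   # rotation matrix -> [w,x,y,z], sign-canonicalized (w >= 0)
    q = np.roll(Rotation.from_matrix(R).as_quat(), 1)  # scipy gives [x,y,z,w]
    return -q if q[0] < 0 else q

def calibrate(motions):
    """motions: list of ((Ra, ta), (Rb, tb)) relative sensor-pair moves."""
    # Step 1: rotation. qa * qx = qx * qb  =>  (left(qa) - right(qb)) qx = 0.
    A = np.vstack([left(quat(Ra)) - right(quat(Rb))
                   for (Ra, _), (Rb, _) in motions])
    qx = np.linalg.svd(A)[2][-1]                      # smallest singular vector
    Rx = Rotation.from_quat(np.roll(qx, -1)).as_matrix()
    # Step 2: translation. (Ra - I) tx = Rx tb - ta, stacked over motions.
    M = np.vstack([Ra - np.eye(3) for (Ra, _), _ in motions])
    b = np.concatenate([Rx @ tb - ta for (_, ta), (_, tb) in motions])
    tx, *_ = np.linalg.lstsq(M, b, rcond=None)
    return Rx, tx

# Toy check against a known extrinsic transformation.
rng = np.random.default_rng(2)
Rx_true = Rotation.from_euler("xyz", [0.2, -0.1, 0.5]).as_matrix()
tx_true = np.array([0.3, 0.0, 0.1])
motions = []
for _ in range(8):
    Ra = Rotation.from_euler("xyz", 0.5 * rng.standard_normal(3)).as_matrix()
    ta = rng.standard_normal(3)
    Rb = Rx_true.T @ Ra @ Rx_true                     # AX = XB constraints
    tb = Rx_true.T @ (Ra @ tx_true + ta - tx_true)
    motions.append(((Ra, ta), (Rb, tb)))
Rx, tx = calibrate(motions)
print(np.allclose(Rx, Rx_true, atol=1e-6), np.round(tx, 6))
```

    The sign canonicalization in quat() matters: conjugation preserves a quaternion's scalar part, so forcing w >= 0 on both sides keeps the stacked linear system consistent.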
  • Online Calibration of Vehicle Powertrain and Pose Estimation Parameters Using Integrated Dynamics Authors: Seegmiller, Neal Andrew; Kelly, Alonzo; Rogers-Marcovitz, Forrest
    This paper presents an online approach to calibrating vehicle model parameters that uses the integrated dynamics of the system. Specifically, we describe the identification of the time constant and delay in a first-order model of the vehicle powertrain, as well as parameters required for pose estimation (including position offsets for the inertial measurement unit, steer angle sensor parameters, and wheel radius). Unlike classical techniques, our approach does not require differentiation of state measurements, making it ideal when only low-frequency measurements are available. Experimental results on the LandTamer and Zoe rover platforms show online calibration using integrated dynamics to be fast and more accurate than both manual and classical calibration methods.
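    A hedged sketch of the integrated-dynamics idea for the first-order powertrain model tau*dv/dt = u(t-d) - v: integrating both sides gives tau*(v(t) - v(0)) = integral of (u(s-d) - v(s)) ds, which is linear in tau once the delay d is fixed. So one can grid-search d and solve tau by linear least squares without ever differentiating the speed measurements. The model, signals and constants below are illustrative assumptions, not the paper's setup.

```python
# Identify (tau, d) of a first-order powertrain from integrated dynamics:
# no numerical differentiation of the (possibly low-rate) speed signal.
import numpy as np

def identify(t, u, v, delays):
    """Return (tau, d) minimizing the integrated-dynamics residual."""
    best = (np.inf, None, None)
    for d in delays:
        ud = np.interp(t - d, t, u)                  # delayed command
        # Cumulative trapezoid-rule integral of (ud - v), no derivatives.
        e = ud - v
        I = np.concatenate([[0.0],
                            np.cumsum(np.diff(t) * 0.5 * (e[1:] + e[:-1]))])
        A = (v - v[0])[1:, None]                     # tau * (v - v0) = integral
        b = I[1:]
        tau = np.linalg.lstsq(A, b, rcond=None)[0][0]
        res = np.linalg.norm(A[:, 0] * tau - b)
        if res < best[0]:
            best = (res, tau, d)
    return best[1], best[2]

# Synthetic data from tau = 0.8 s, d = 0.2 s, then recovery.
rng = np.random.default_rng(3)
t = np.arange(0.0, 20.0, 0.05)
u = np.sign(np.sin(0.7 * t))                         # step-like speed commands
tau_true, d_true, v = 0.8, 0.2, np.zeros_like(t)
for k in range(1, len(t)):
    ud = np.interp(t[k] - d_true, t, u)
    v[k] = v[k-1] + (t[k] - t[k-1]) * (ud - v[k-1]) / tau_true
v += 0.01 * rng.standard_normal(len(t))              # measurement noise
print(identify(t, u, v, delays=np.arange(0.0, 0.5, 0.05)))  # ~ (0.8, 0.2)
```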