TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos is available through PaperPlaza. A step-by-step guide to accessing the videos is here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv

Localization

  • 3-D Mutual Localization with Anonymous Bearing Measurements Authors: Cognetti, Marco; Stegagno, Paolo; Franchi, Antonio; Oriolo, Giuseppe; Buelthoff, Heinrich H.
    We present a decentralized algorithm for estimating mutual 3-D poses in a group of mobile robots, such as a team of UAVs. Our algorithm uses bearing measurements reconstructed, e.g., by a visual sensor, and inertial measurements coming from the robot IMU. Since identification of a specific robot in a group would require visual tagging and may be cumbersome in practice, we simply assume that the bearing measurements are anonymous. The proposed localization method is a non-trivial extension of our previous algorithm for the 2-D case, and exhibits similar performance and robustness. An experimental validation of the algorithm has been performed using quadrotor UAVs.
  • Online Model Estimation of Ultra-Wideband TDOA Measurements for Mobile Robot Localization Authors: Prorok, Amanda; Gonon, Lukas; Martinoli, Alcherio
    Ultra-wideband (UWB) localization is a recent technology that promises to outperform many indoor localization methods currently available. Yet, non-line-of-sight (NLOS) positioning scenarios can create large biases in the time-difference-of-arrival (TDOA) measurements, and must be addressed with accurate measurement models in order to avoid significant localization errors. In this work, we first develop an efficient, closed-form TDOA error model and analyze its estimation characteristics by calculating the Cramer-Rao lower bound (CRLB). We subsequently detail how an online Expectation Maximization (EM) algorithm is adopted to find an elegant formalism for the maximum likelihood estimate of the model parameters. We perform real experiments on a mobile robot equipped with a UWB emitter, and show that the online estimation algorithm leads to excellent localization performance due to its ability to adapt to the varying NLOS path conditions over time. (A hedged code sketch of the online-EM idea appears after this session's list.)
  • Orientation Only Loop-Closing with Closed-Form Trajectory Bending Authors: Dubbelman, Gijs; Browning, Brett; Hansen, Peter; Dias, M. Bernardine
    In earlier work, closed-form trajectory bending was shown to provide an efficient and accurate out-of-core solution for loop-closing exactly sparse trajectories. Here we extend it to fuse exactly sparse trajectories, obtained from relative pose estimates, with absolute orientation data. This allows us to close the loop using absolute orientation data only. The benefit is that our approach relies neither on the observations from which the trajectory was estimated nor on the probabilistic links between poses in the trajectory; it is therefore highly efficient. The proposed method is compared against regular fusion and an iterative trajectory bending solution using a 5 km long urban trajectory. Proofs concerning the optimality of our method are provided.
  • Capping Computation Time and Storage Requirements for Appearance-Based Localization with CAT-SLAM Authors: Maddern, William; Milford, Michael J; Wyeth, Gordon
    Appearance-based localization is increasingly used for loop closure detection in metric SLAM systems. Since it relies only upon the appearance-based similarity between images from two locations, it can perform loop closure regardless of accumulated metric error. However, the computation time and memory requirements of current appearance-based methods scale linearly not only with the size of the environment but also with the operation time of the platform. These properties impose severe restrictions on long-term autonomy for mobile robots, as loop closure performance will inevitably degrade with increased operation time. We present a set of improvements to the appearance-based SLAM algorithm CAT-SLAM to constrain computation scaling and memory usage with minimal degradation in performance over time. The appearance-based comparison stage is accelerated by exploiting properties of the particle observation update, and nodes in the continuous trajectory map are removed according to minimal information loss criteria. We demonstrate constant time and space loop closure detection in a large urban environment with recall performance exceeding FAB-MAP by a factor of 3 at 100% precision, and investigate the minimum computational and memory requirements for maintaining mapping performance.
  • Improving the Accuracy of EKF-Based Visual-Inertial Odometry Authors: Li, Mingyang; Mourikis, Anastasios
    In this paper, we perform a rigorous analysis of EKF-based visual-inertial odometry (VIO) and present a method for improving its performance. Specifically, we examine the properties of EKF-based VIO, and show that the standard way of computing Jacobians in the filter inevitably causes inconsistency and loss of accuracy. This result is derived based on an observability analysis of the EKF's linearized system model, which proves that the yaw erroneously appears to be observable. In order to address this problem, we propose modifications to the multi-state constraint Kalman filter (MSCKF) algorithm, which ensure the correct observability properties without incurring additional computational cost. Extensive simulation tests and real-world experiments demonstrate that the modified MSCKF algorithm outperforms competing methods, both in terms of consistency and accuracy.
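The Prorok et al. talk above couples a closed-form TDOA error model with an online Expectation Maximization algorithm. As a hedged illustration of the online-EM idea only (not the paper's closed-form model), the sketch below fits a two-component Gaussian mixture to TDOA residuals with stochastic online updates: component 0 stands in for line-of-sight errors and component 1 for NLOS biases. The class name, learning rate, and initial parameter values are assumptions made for the example.

```python
import numpy as np

class OnlineTdoaErrorModel:
    """Online EM for a two-component Gaussian mixture over TDOA residuals.

    A hypothetical stand-in for the paper's closed-form error model:
    component 0 represents (near) line-of-sight errors, component 1 the
    larger, positively biased NLOS errors.
    """

    def __init__(self, forgetting=0.01):
        self.lam = forgetting                    # step size for the online updates
        self.w = np.array([0.5, 0.5])            # mixture weights
        self.mu = np.array([0.0, 1.0])           # component means [m] (assumed initial values)
        self.var = np.array([0.05, 0.5])         # component variances [m^2]

    def _component_likelihoods(self, r):
        return self.w / np.sqrt(2 * np.pi * self.var) * np.exp(-(r - self.mu) ** 2 / (2 * self.var))

    def update(self, r):
        """One E-step/M-step pass for a single TDOA residual r (metres)."""
        resp = self._component_likelihoods(r)
        resp /= resp.sum()                                         # E-step: responsibilities
        self.w += self.lam * (resp - self.w)                       # M-step: stochastic updates
        self.mu += self.lam * resp * (r - self.mu) / np.maximum(self.w, 1e-6)
        self.var += self.lam * resp * ((r - self.mu) ** 2 - self.var) / np.maximum(self.w, 1e-6)
        self.var = np.maximum(self.var, 1e-4)                      # keep variances positive

    def likelihood(self, r):
        """Mixture likelihood p(r), usable as a measurement model in a filter."""
        return self._component_likelihoods(r).sum()
```

A localization filter could call update() on each new residual and weight its hypotheses with likelihood(); the forgetting rate trades adaptation speed against estimation noise.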

RGB-D Localization and Mapping

  • Efficient Scene Simulation for Robust Monte Carlo Localization Using an RGB-D Camera Authors: Fallon, Maurice; Johannsson, Hordur; Leonard, John
    This paper presents Kinect Monte Carlo Localization (KMCL), a new method for localization in three-dimensional indoor environments using RGB-D cameras, such as the Microsoft Kinect. The approach makes use of a low-fidelity a priori 3-D model of the area of operation composed of large planar segments, such as walls and ceilings, which are assumed to remain static. Using this map as input, the KMCL algorithm employs feature-based visual odometry as the particle propagation mechanism and utilizes the 3-D map and the underlying sensor image formation model to efficiently simulate RGB-D camera views at the location of particle poses, using a graphics processing unit (GPU). The generated 3D views of the scene are then used to evaluate the likelihood of the particle poses. This GPU implementation yields a factor-of-ten speedup over a pure distance-based method while providing comparable accuracy. Experimental results are presented for five different configurations, including: (1) a robotic wheelchair, (2) a sensor mounted on a person, (3) an Ascending Technologies quadrotor, (4) a Willow Garage PR2, and (5) an RWI B21 wheeled mobile robot platform. The results demonstrate that the system can perform robust localization with 3D information for motions as fast as 1.5 meters per second. The approach is designed to be applicable not just to robotics but also to other applications such as wearable computing. (A hedged sketch of the per-particle likelihood idea appears after this session's list.)
  • Robust Egomotion Estimation Using ICP in Inverse Depth Coordinates Authors: Lui, Wen Lik Dennis; Tang, Titus Jia Jie; Drummond, Tom; Li, Wai Ho
    This paper presents a 6 degrees of freedom egomotion estimation method using Iterative Closest Point (ICP) for low-cost, low-accuracy range cameras such as the Microsoft Kinect. Instead of Euclidean coordinates, the method uses inverse depth coordinates, which better conform to the error characteristics of raw sensor data. Novel inverse depth formulations of point-to-point and point-to-plane error metrics are derived as part of our implementation. The implemented system runs in real time at an average of 28 frames per second (fps) on a standard computer. Extensive experiments were performed to evaluate different combinations of error metrics and parameters. Results show that our system is accurate and robust across a variety of motion trajectories. The point-to-plane error metric was found to be the best at coping with large inter-frame motion while remaining accurate and maintaining real-time performance. (A minimal sketch of the inverse-depth coordinate change appears after this session's list.)
  • Online Egomotion Estimation of RGB-D Sensors Using Spherical Harmonics Authors: Osteen, Philip; Owens, Jason; Kessens, Chad C.
    We present a technique to estimate the egomotion of an RGB-D sensor based on rotations of functions defined on the unit sphere. In contrast to traditional approaches, our technique is not based on image features and does not require correspondences to be generated between frames of data. Instead, consecutive functions are correlated using spherical harmonic analysis. An Extended Gaussian Image (EGI), created from the local normal estimates of a point cloud, defines each function. Correlations are efficiently computed using Fourier transformations, resulting in a 3 Degree of Freedom (3-DoF) rotation estimate. An Iterative Closest Point (ICP) process then refines the initial rotation estimate and adds a translational component, yielding a full 6-DoF egomotion estimate. The focus of this work is to investigate the merits of using spherical harmonic analysis for egomotion estimation by comparison with alternative 6-DoF methods. We compare the performance of the proposed technique with that of stand-alone ICP and image feature based methods. As with other egomotion techniques, estimation errors accumulate and degrade results, necessitating correction mechanisms for robust localization. For this report, however, we use the raw estimates; no filtering or smoothing processes are applied. In-house and external benchmark data sets are analyzed for both runtime and accuracy. Results show that the algorithm is competitive in terms of both accuracy and runtime, and future work will aim to
  • Incremental Registration of RGB-D Images Authors: Dryanovski, Ivan; Jaramillo, Carlos; Xiao, Jizhong
    An RGB-D camera is a sensor which outputs range and color information about objects. Recent technological advances in this area have introduced affordable RGB-D devices in the robotics community. In this paper, we present a real-time technique for 6-DoF camera pose estimation through the incremental registration of RGB-D images. First, a set of edge features is computed from the depth and color images. An initial motion estimation is calculated through aligning the features. This initial guess is refined by applying the Iterative Closest Point algorithm on the dense point cloud data. A rigorous error analysis assesses several sets of RGB-D ground truth data via an error accumulation metric. We show that the proposed two-stage approach significantly reduces error in the pose estimation, compared to a state-of-the-art ICP registration technique.
  • An Evaluation of the RGB-D SLAM System Authors: Endres, Felix; Hess, Juergen Michael; Engelhard, Nikolas; Sturm, Jürgen; Cremers, Daniel; Burgard, Wolfram
    We present an approach to simultaneous localization and mapping (SLAM) for RGB-D cameras like the Microsoft Kinect. Our system concurrently estimates the trajectory of a hand-held Kinect and generates a dense 3D model of the environment. We present the key features of our approach and evaluate its performance thoroughly on a recently published dataset, including a large set of sequences of different scenes with varying camera speeds and illumination conditions. In particular, we evaluate the accuracy, robustness, and processing time for three different feature descriptors (SIFT, SURF, and ORB). The experiments demonstrate that our system can robustly deal with difficult data in common indoor scenarios while being fast enough for online operation. Our system is fully available as open-source.
  • Depth Camera Based Indoor Mobile Robot Localization and Navigation Authors: Biswas, Joydeep; Veloso, Manuela
    The sheer volume of data generated by depth cameras is challenging to process in real time, in particular when used for indoor mobile robot localization and navigation. We introduce the Fast Sampling Plane Filtering (FSPF) algorithm to reduce the volume of the 3D point cloud by sampling points from the depth image, and classifying local grouped sets of points as belonging to planes in 3D (the "plane filtered" points) or points that do not correspond to planes within a specified error margin (the "outlier" points). We then introduce a localization algorithm based on an observation model that down-projects the plane filtered points onto 2D, and assigns correspondences for each point to lines in the 2D map. The full sampled point cloud (consisting of both plane filtered as well as outlier points) is processed for obstacle avoidance for autonomous navigation. All our algorithms process only the depth information, and do not require additional RGB data. The FSPF, localization and obstacle avoidance algorithms run in real time at full camera frame rates (30 Hz) with low CPU requirements (16%). We provide experimental results demonstrating the effectiveness of our approach for indoor mobile robot localization and navigation. We further compare the accuracy and robustness in localization using depth cameras with FSPF vs. alternative approaches that simulate laser rangefinder scans from the 3D data. (A simplified plane-filtering sketch appears after this session's list.)
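For the KMCL talk (Fallon et al.), each particle is scored by comparing a depth image simulated from the prior 3-D model at the particle's pose against the measured depth image. The sketch below shows one hypothetical form such a per-particle likelihood could take; the per-pixel Gaussian model and the parameter values are assumptions, and the GPU-based view simulation described in the abstract is not reproduced here.

```python
import numpy as np

def particle_log_likelihood(measured_depth, simulated_depth, sigma=0.1, max_range=5.0):
    """Score one particle by comparing measured vs. simulated depth images.

    A hypothetical per-pixel Gaussian model; the paper's actual KMCL likelihood
    and its GPU scene simulation are not reproduced here.
    """
    valid = (measured_depth > 0) & (measured_depth < max_range) & (simulated_depth > 0)
    if not np.any(valid):
        return -np.inf                      # no overlap between measured and simulated views
    err = measured_depth[valid] - simulated_depth[valid]
    # Average over valid pixels so image resolution does not dominate the score
    return -0.5 * np.mean((err / sigma) ** 2)
```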
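For the inverse-depth ICP talk (Lui et al.), the key change of variables is to represent each measurement as (u, v, q) with q = 1/z rather than as a Euclidean point, because depth-camera noise grows with range. The conversion below is a minimal sketch under a standard pinhole model; the intrinsics are placeholders, and the paper's point-to-point and point-to-plane error metrics in these coordinates are not reproduced.

```python
import numpy as np

def euclidean_to_inverse_depth(points, fx, fy, cx, cy):
    """Map camera-frame points (x, y, z) to inverse-depth coordinates (u, v, q)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = fx * x / z + cx            # pixel column
    v = fy * y / z + cy            # pixel row
    q = 1.0 / z                    # inverse depth, where raw sensor noise is more uniform
    return np.stack([u, v, q], axis=1)

def inverse_depth_to_euclidean(uvq, fx, fy, cx, cy):
    """Inverse mapping, handy for checking round trips."""
    u, v, q = uvq[:, 0], uvq[:, 1], uvq[:, 2]
    z = 1.0 / q
    return np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
```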
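For the FSPF talk (Biswas and Veloso), the core loop samples small neighborhoods from the depth image, fits a local plane, and labels points as plane-filtered or outliers by their fit residual. The sketch below is a simplified, hypothetical rendering of that idea on an organized point cloud; the window size, sample count, and error margin are made-up parameters, not the paper's.

```python
import numpy as np

def fast_sampling_plane_filter(cloud, n_samples=200, window=10, margin=0.02, rng=None):
    """Simplified FSPF-style filter on an organized (H, W, 3) point cloud.

    Returns (plane_points, outlier_points). Invalid pixels are assumed to have z == 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, _ = cloud.shape
    plane_pts, outlier_pts = [], []
    for _ in range(n_samples):
        r = rng.integers(window, h - window)
        c = rng.integers(window, w - window)
        patch = cloud[r - window:r + window, c - window:c + window].reshape(-1, 3)
        patch = patch[patch[:, 2] > 0]
        if len(patch) < 10:
            continue                                    # not enough valid depth in this window
        centroid = patch.mean(axis=0)
        # Plane normal = right singular vector with the smallest singular value
        _, _, vt = np.linalg.svd(patch - centroid, full_matrices=False)
        normal = vt[-1]
        dist = np.abs((patch - centroid) @ normal)      # point-to-plane residuals
        inlier = dist < margin
        plane_pts.append(patch[inlier])
        outlier_pts.append(patch[~inlier])
    plane = np.concatenate(plane_pts) if plane_pts else np.empty((0, 3))
    outliers = np.concatenate(outlier_pts) if outlier_pts else np.empty((0, 3))
    return plane, outliers
```

The plane-filtered points would feed the 2-D observation model described in the abstract, while both point sets are used for obstacle avoidance.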

Micro and Nano Robots II

  • Motion Control of Tetrahymena Pyriformis Cells with Artificial Magnetotaxis: Model Predictive Control (MPC) Approach Authors: Ou, Yan; Kim, Dal Hyung; Kim, Paul; Kim, MinJun; Julius, Agung
    The use of live microbial cells as microscale robots is an attractive premise, primarily because they are easy to produce and to fuel. In this paper, we study the motion control of magnetotactic Tetrahymena pyriformis cells. Magnetotactic T. pyriformis is produced by introducing artificial magnetic dipoles into the cells. Subsequently, they can be steered by using an external magnetic field. We observe that the external magnetic field can only be used to affect the swimming direction of the cells, while the swimming velocity depends largely on the cells’ own propulsion. Feedback information for control is obtained from a computer vision system that tracks the cell. The contribution of this paper is twofold. First, we construct a discrete-time model for the cell dynamics that is based on first principles. Subsequently, we identify the model parameters using the Least Squares approach. Second, we formulate a model predictive approach for feedback control of magnetotactic T. pyriformis. Both the model fitness and the performance of the feedback controller are verified using experimental data. (A hedged least-squares identification sketch appears after this session's list.)
  • Robust H-Infinity Control for Electromagnetic Steering of Microrobots Authors: Marino, Hamal; Bergeles, Christos; Nelson, Bradley J.
    Electromagnetic systems for in vivo microrobot steering have the potential to enable new types of localized and minimally invasive interventions. Accurate control of microrobots in natural fluids requires precise, high-bandwidth localization and accurate knowledge of the steering system’s parameters. However, current in vivo imaging methodologies, such as fluoroscopy, must be used at low update rates to minimize radiation exposure. Low frame rates introduce localization uncertainties. Additionally, the parameters of the electromagnetic steering system are estimated with inaccuracies. These uncertainties can be addressed with robust H-infinity control, which is investigated in this paper. The controller is based on a linear uncertain dynamical model of the steering system and microrobot. Simulations show that the proposed control scheme accounts for modeling uncertainties, and that the controller can be used for servoing in low viscosity fluids using low frame rates. Experiments in a prototype electromagnetic steering system support the simulations.
  • Magnetic Dragging of Vascular Obstructions by Means of Electrostatic and Antibody Binding Authors: Khorami Llewellyn, Maral; Dario, Paolo; Menciassi, Arianna; Sinibaldi, Edoardo
    Exploiting miniature robots and microrobots for endovascular therapeutics is a promising approach; besides chemical strategies (typically systemic), topical mechanical approaches to obstruction removal exist, but they produce debris that is harmful to blood circulation. Magnetic particles (MPs) are also studied for blood clot targeting. We investigated magnetic dragging of clots/debris by means of both electrostatic and antibody binding. We successfully produced magnetotactic blood clots in vitro and experimentally showed that they can be effectively dragged within a fluidic channel. We also exploited a magnetic force model to quantitatively analyze the experimental results, including an estimate of the relative efficiency of electrostatic versus antibody binding. Our study takes a first step towards more realistic in vivo investigations, with a view to integration into microrobotic approaches to vascular obstruction removal.
  • Coordination of Droplets on Light-Actuated Digital Microfluidic Systems Authors: Ma, Zhiqiang; Akella, Srinivas
    In this paper we explore the problem of coordinating multiple droplets in light-actuated digital microfluidic systems intended for use as lab-on-a-chip systems. In a light-actuated digital microfluidic system, droplets of chemicals are actuated on a photosensitive chip by moving projected light patterns. Our goal is to perform automated manipulation of multiple droplets in parallel on a microfluidic platform. To achieve collision-free droplet coordination while optimizing completion times, we apply multiple robot coordination techniques. We present a mixed-integer linear programming formulation for coordinating droplets given their paths. This approach permits arbitrary droplet formations, and coordination of both individual droplets and batches of droplets. We then present a linear-time stepwise approach for batch coordination of droplet matrix layouts. (A hedged sketch of stepwise, collision-aware stepping appears after this session's list.)
  • Mobility and Kinematic Analysis of a Novel Dexterous Micro Gripper Authors: Xiao, Shunli; Li, Yangmin
    This paper presents the design and analysis of a dexterous micro-gripper with two fingers, each providing 2-DOF translational motion. The two fingers can move independently over a range of several hundred microns and can cooperate with each other to perform complex manipulation of micro objects. The mobility characteristics and the inverse parallel kinematic model of a single finger are analyzed using screw theory and the compliance and stiffness matrix method, and are validated by finite-element analysis (FEA). Both the FEA and the theoretical model confirm that the fingers move purely translationally, so the designed micro-gripper can realize a wide range of complex functions. By properly selecting the amplification ratio and the stroke of the PZT actuator, the gripper can be mounted on a positioning stage to achieve a larger motion range, making it suitable for micro-parts assembly and bio-manipulation systems.
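For the T. pyriformis talk (Ou et al.), the abstract describes fitting a discrete-time cell model with least squares before designing the model predictive controller. As a hedged illustration only, the sketch below identifies a simple assumed model in which the cell swims at its own speed along a heading that relaxes toward the applied field direction; the model structure, variable names, and parameters are assumptions, not the paper's.

```python
import numpy as np

def identify_cell_model(pos, heading, field_angle, dt):
    """Least-squares fit of (swim speed v, turning gain alpha) for an assumed model:

        p[k+1]     = p[k] + v * dt * [cos(theta[k]), sin(theta[k])]
        theta[k+1] = theta[k] + alpha * dt * wrap(psi[k] - theta[k])

    pos: (N, 2) tracked positions, heading: (N,) cell headings [rad],
    field_angle: (N,) commanded magnetic-field directions [rad].
    """
    wrap = lambda a: np.arctan2(np.sin(a), np.cos(a))
    # Fit v from position increments against the heading direction
    dp = np.diff(pos, axis=0)
    step = dt * np.stack([np.cos(heading[:-1]), np.sin(heading[:-1])], axis=1)
    v = float(np.sum(dp * step) / np.sum(step * step))
    # Fit alpha from heading increments against the field/heading error
    dtheta = wrap(np.diff(heading))
    err = dt * wrap(field_angle[:-1] - heading[:-1])
    alpha = float(np.sum(dtheta * err) / np.sum(err * err))
    return v, alpha
```

The identified pair (v, alpha) would then parameterize the prediction model used inside an MPC loop.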
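For the droplet-coordination talk (Ma and Akella), the abstract mentions a linear-time stepwise approach as an alternative to the MILP formulation. The sketch below is a hypothetical prioritized variant of such stepwise coordination, meant only to illustrate the flavor of collision-free stepping rather than the paper's algorithm: droplets advance along precomputed grid paths one time step at a time, and a droplet waits whenever its next cell is occupied or the move would swap positions with another droplet.

```python
def coordinate_droplets(paths):
    """Advance droplets along fixed grid paths, delaying moves that would collide.

    paths: list of lists of (row, col) cells, one path per droplet.
    Returns a schedule: list of per-time-step position tuples.
    """
    idx = [0] * len(paths)                      # progress of each droplet along its path
    schedule = [tuple(p[0] for p in paths)]
    while any(i < len(p) - 1 for i, p in zip(idx, paths)):
        current = [p[i] for p, i in zip(paths, idx)]
        nxt = list(current)
        for d, (p, i) in enumerate(zip(paths, idx)):
            if i == len(p) - 1:
                continue                        # droplet already at its goal
            target = p[i + 1]
            # Wait if the target cell is taken, or a higher-priority droplet would swap with us
            occupied = target in nxt[:d] + current[d + 1:]
            swap = any(nxt[j] == current[d] and current[j] == target for j in range(d))
            if not occupied and not swap:
                nxt[d] = target
                idx[d] = i + 1
        if nxt == current:
            raise RuntimeError("deadlock: no droplet can advance")
        schedule.append(tuple(nxt))
    return schedule
```

Paths that conflict in both directions trigger the deadlock error, in which case the path planner must supply different routes; the MILP formulation described in the talk instead optimizes the coordination globally.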