TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos can be obtained by visiting this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided your consent form for your video to be published and it is still missing, please contact support@techtalks.tv

Marine Robotics I

  • Towards Improving Mission Execution for Autonomous Gliders with an Ocean Model and Kalman Filter Authors: Smith, Ryan N.; Kelly, Jonathan; Sukhatme, Gaurav
    Effective execution of a planned path by an underwater vehicle is important for proper analysis of the gathered science data, as well as to ensure the safety of the vehicle during the mission. Here, we propose the use of an unscented Kalman filter to aid in determining how the planned mission is executed. Given a set of waypoints that define a planned path and a discretization of the ocean currents from a regional ocean model, we present an approach to determine the time interval at which the glider should surface to maintain a prescribed tracking error, while also limiting its time on the ocean surface. We assume practical mission parameters provided from previous field trials for the problem setup, and provide the simulated results of the Kalman filter mission planning approach. The results are initially compared to data from prior field experiments in which an autonomous glider executed the same path without pre-planning. Then, the results are validated through field trials with multiple autonomous gliders implementing different surfacing intervals simultaneously while following the same path. (A simplified sketch of the surfacing-interval computation appears after this session list.)
  • Position and Velocity Filters for Intervention AUVs Based on Single Range and Depth Measurements Authors: Viegas, Daniel; Batista, Pedro; Oliveira, Paulo; Silvestre, Carlos
    This paper proposes novel cooperative navigation solutions for an Intervention Autonomous Underwater Vehicle (I-AUV) working in tandem with an Autonomous Surface Craft (ASC). The I-AUV is assumed to be moving in the presence of constant unknown ocean currents, and aims to estimate its position relying on measurements of its range to the ASC and of its depth relative to the sea level. Two different scenarios are considered: in one, the ASC transmits its position and velocity to the I-AUV, while in the other the ASC transmits only its position, and the I-AUV has access to measurements of its velocity relative to the ASC. A sufficient condition for observability and a method for designing state observers with Globally Asymptotically Stable (GAS) error dynamics are presented for both problems. Finally, simulation results are included and discussed to assess the performance of the proposed solutions in the presence of measurement noise.
  • Uncertainty-Driven View Planning for Underwater Inspection Authors: Hollinger, Geoffrey; Englot, Brendan; Hover, Franz; Mitra, Urbashi; Sukhatme, Gaurav
    We discuss the problem of inspecting an underwater structure, such as a submerged ship hull, with an autonomous underwater vehicle (AUV). In such scenarios, the goal is to construct an accurate 3D model of the structure and to detect any anomalies (e.g., foreign objects or deformations). We propose a method for constructing 3D meshes from sonar-derived point clouds that provides watertight surfaces, and we introduce uncertainty modeling through non-parametric Bayesian regression. Uncertainty modeling provides novel cost functions for planning the path of the AUV to minimize a metric of inspection performance. We draw connections between the resulting cost functions and submodular optimization, which provides insight into the formal properties of active perception problems. In addition, we present experimental trials that utilize profiling sonar data from ship hull inspection.
  • Formation Control of Underactuated Autonomous Surface Vessels Using Redundant Manipulator Analogs Authors: Bishop, Bradley
    In this paper, we present a method utilizing redundant manipulator analogs for formation control of underactuated autonomous surface vessels (ASVs) with realistic turning constraints and dynamics. The method used relies on casting the swarm as a single entity and utilizing redundant manipulator techniques to guarantee task-level formation control as well as obstacle avoidance and secondary tasks such as mean position control. The method presented differs from other approaches in that the units herein represent a larger class of ASVs with realistic limitations on vessel motions and that the exact position of each of the units on the formation profile is not specified.
  • Delayed State Information Filter for USBL-Aided AUV Navigation Authors: Ribas, David; Ridao, Pere; Mallios, Angelos; Palomeras, Narcis
    This paper presents a navigation system for an Autonomous Underwater Vehicle (AUV) which merges standard dead reckoning navigation data with absolute position fixes from an Ultra-Short Base Line (USBL) system. Traditionally, the USBL transceiver is located on the surface, which makes it necessary to feed the position fixes back to the AUV by means of an acoustic modem. An Information filter, which maintains a bounded circular buffer of past vehicle poses, is in charge of the sensor data fusion while dealing with the delays induced by the acoustic communication. The method is validated using a data set gathered for a dam inspection task. (A toy illustration of the buffered-pose bookkeeping appears after this session list.)
  • Miniature Underwater Glider: Design, Modeling, and Experimental Results Authors: Zhang, Feitian; Thon, John; Thon, Cody; Tan, Xiaobo
    The concept of gliding robotic fish combines gliding and fin-actuation mechanisms to realize energy-efficient locomotion and high maneuverability, and holds strong promise for mobile sensing in versatile aquatic environments. In this paper we present the modeling and design of a miniature fish-like glider, a key enabling component for gliding robotic fish. The full dynamics of the glider is first derived and then reduced to the sagittal plane, where the lift, drag, and pitch moment coefficients are obtained as linear or quadratic functions of the attack angle based on computational fluid dynamics (CFD) analysis. The model is used to design the glider by accommodating stringent constraints on dimensions while meeting the desired specification on speed. A fully untethered prototype of the underwater glider is developed, with a weight of 4 kg and a length of 40 cm. With a net buoyancy of 20 g, it realizes a steady gliding speed of 20 cm/s. The volume and net buoyancy of this glider are less than 10% and 5%, respectively, of those of gliders reported in the literature, and its speed per unit net buoyancy is over 9 times that of those vehicles. Experimental results show that the model captures well both the steady glide behavior under different control inputs and the dynamics during transients. (A worked steady-glide calculation appears after this session list.)
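
A few of the abstracts above lend themselves to small worked examples; sketches for this session follow. First, the surfacing-interval idea from the Smith et al. glider paper: the sketch below propagates position uncertainty between surfacings under a simple constant-current dead-reckoning model and surfaces once the predicted 1-sigma error exceeds a budget. It is a minimal linear-Gaussian stand-in with invented parameters, not the authors' unscented Kalman filter driven by a regional ocean model.

    import numpy as np

    # Illustrative linear-Gaussian stand-in for the paper's unscented Kalman
    # filter. State: glider position (x, y) in metres.
    DT = 60.0                      # prediction step [s] (assumed)
    SPEED = 0.3                    # nominal through-water speed [m/s] (assumed)
    Q = np.diag([0.5, 0.5]) * DT   # per-step current-uncertainty noise (assumed)
    ERR_BOUND = 100.0              # allowed 1-sigma position error [m] (assumed)

    def predict(x, P, heading, current):
        """Dead-reckon one step under a constant-current model (F = I)."""
        v = SPEED * np.array([np.cos(heading), np.sin(heading)]) + current
        return x + v * DT, P + Q

    def surfacing_interval(x0, P0, heading, current):
        """Propagate until the worst-axis 1-sigma error exceeds the budget;
        the elapsed time is the longest the glider may stay submerged."""
        x, P, t = x0.copy(), P0.copy(), 0.0
        while np.sqrt(np.linalg.eigvalsh(P).max()) < ERR_BOUND:
            x, P = predict(x, P, heading, current)
            t += DT
        return t

    x0 = np.zeros(2)
    P0 = np.diag([4.0, 4.0])           # uncertainty just after a GPS fix (assumed)
    current = np.array([0.05, -0.02])  # current estimate from an ocean model [m/s]
    print("surface after %.0f s" % surfacing_interval(x0, P0, 0.0, current))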
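
The Ribas et al. system fuses USBL fixes that arrive late over the acoustic link by keeping a bounded buffer of past vehicle poses. The toy class below shows only that bookkeeping, under a much simpler additive-correction scheme than the paper's information filter (the fixed gain and 2-D state are assumptions): a late fix is matched against the buffered pose for its timestamp, and the resulting correction, common to every later dead-reckoned pose, is applied to the present estimate and the buffer.

    from collections import deque

    import numpy as np

    class DelayedFixFuser:
        """Toy fusion of delayed absolute position fixes with dead
        reckoning via a bounded circular buffer of past poses."""

        def __init__(self, maxlen=200):
            self.buffer = deque(maxlen=maxlen)  # (timestamp, pose) pairs
            self.pose = np.zeros(2)

        def dead_reckon(self, t, velocity, dt):
            self.pose = self.pose + velocity * dt
            self.buffer.append((t, self.pose.copy()))

        def delayed_fix(self, t_fix, position, gain=0.8):
            # Match the fix against the buffered pose closest in time.
            t_buf, pose_buf = min(self.buffer, key=lambda e: abs(e[0] - t_fix))
            # Dead-reckoning error at t_fix is shared by all later poses,
            # so the same correction can be applied to the present one.
            correction = gain * (np.asarray(position) - pose_buf)
            self.pose = self.pose + correction
            self.buffer = deque(((t, p + correction) for t, p in self.buffer),
                                maxlen=self.buffer.maxlen)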
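
The Zhang et al. glider figures can be sanity-checked with the standard steady-glide force balance in the sagittal plane: net buoyancy B splits between lift and drag, so tan(gamma) = CD/CL and B cos(gamma) = (1/2) rho V^2 S CL. The reference area and coefficients below are invented (the paper fits them from CFD as functions of the attack angle), but with the abstract's 20 g net buoyancy the speed lands near the reported 20 cm/s.

    import numpy as np

    rho = 1000.0         # water density [kg/m^3]
    B = 0.020 * 9.81     # net buoyancy force for 20 g [N] (from the abstract)
    S = 0.01             # reference area [m^2] (assumed)
    CL, CD = 0.9, 0.2    # lift/drag coefficients at trim (assumed)

    gamma = np.arctan2(CD, CL)                           # glide-path angle [rad]
    V = np.sqrt(2 * B * np.cos(gamma) / (rho * S * CL))  # steady glide speed
    print("glide angle %.1f deg, speed %.2f m/s" % (np.degrees(gamma), V))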

Autonomy and Vision for UAVs

  • Cooperative Vision-Aided Inertial Navigation Using Overlapping Views Authors: Melnyk, Igor; Hesch, Joel; Roumeliotis, Stergios
    In this paper, we study the problem of Cooperative Localization (CL) for two robots, each equipped with an Inertial Measurement Unit (IMU) and a camera. We present an algorithm that enables the robots to exploit common features, observed over a sliding-window time horizon, in order to improve the localization accuracy of both robots. In contrast to existing CL methods, which require distance and/or bearing robot-to-robot observations, our algorithm infers the relative position and orientation (pose) of the robots using only the visual observations of common features in the scene. Moreover, we analyze the system observability properties to determine how many degrees of freedom (d.o.f.) of the relative transformation can be computed under different measurement scenarios. Lastly, we present simulation results to evaluate the performance of the proposed method.
  • UAV Vision: Feature Based Accurate Ground Target Localization through Propagated Initializations and Interframe Homographies Authors: Han, Kyuseo; Aeschliman, Chad; Park, Johnny; Kak, Avinash; Kwon, Hyukseong; Pack, Daniel
    Our work presents solutions to two related vexing problems in feature-based localization of ground targets in Unmanned Aerial Vehicle (UAV) images: (i) a good initial guess at the pose estimate that speeds up convergence to the final pose estimate for each image frame in a video sequence; and (ii) time-bounded estimation of the position of the ground target. We address both problems within the framework of the ICP (Iterative Closest Point) algorithm, which now has a rich tradition of use in computer vision and robotics applications. We solve the first problem by frame-to-frame propagation of the computed pose estimates, which supplies the initializations needed by ICP. The second is solved by terminating the iterative estimation process when the time available for each image frame expires. We show that when frame-to-frame homography is factored into the iterative calculations, the accuracy of the position calculated at the time of bailing out of the iterations is nearly always sufficient for the goals of UAV vision. (A toy version of deadline-bounded ICP with propagated initialization appears after this session list.)
  • First Results in Autonomous Landing and Obstacle Avoidance by a Full-Scale Helicopter Authors: Scherer, Sebastian; Chamberlain, Lyle; Singh, Sanjiv
    Currently deployed unmanned rotorcraft rely on carefully preplanned missions and operate from prepared sites, and thus avoid the need to perceive and react to the environment. Here we consider the problems of finding suitable but previously unmapped landing sites, given general coordinates of the goal, and planning collision-free trajectories in real time to land at the “optimal” site. This requires accurate mapping, fast landing zone evaluation algorithms, and motion planning. We report here on the sensing, perception, and motion planning integrated onto a full-scale helicopter that flies completely autonomously. We show results from 8 experiments for landing site selection and 5 obstacle-avoidance runs. These experiments demonstrate the first autonomous full-scale helicopter that successfully selects its own landing sites and avoids obstacles.
  • Real-Time Onboard Visual-Inertial State Estimation and Self-Calibration of MAVs in Unknown Environments Authors: Weiss, Stephan; Achtelik, Markus W.; Lynen, Simon; Chli, Margarita; Siegwart, Roland
    The combination of visual and inertial sensors has proved very popular in MAV navigation due to the flexibility in weight, power consumption, and low cost it offers. At the same time, coping with the large latency between inertial and visual measurements and processing images in real time pose great research challenges. Most modern MAV navigation systems avoid tackling this explicitly by employing a ground station for off-board processing. We propose a navigation algorithm for MAVs equipped with a single camera and an IMU which is able to run onboard and in real time. The main focus is on the proposed speed-estimation module, which converts the camera into a metric body-speed sensor using IMU data within an EKF framework. We show how this module can be used for full self-calibration of the sensor suite in real time. The module is then used both during initialization and as a fall-back solution at tracking failures of a keyframe-based VSLAM module. The latter is based on an existing high-performance algorithm, extended such that it achieves scalable 6DoF pose estimation at constant complexity. Fast onboard speed control is ensured by sole reliance on the optical flow of at least two features in two consecutive camera frames and the corresponding IMU readings. Our nonlinear observability analysis and our real experiments demonstrate that this approach can be used to control a MAV in speed, and we also show results of operation at 40 Hz on an onboard 1.6 GHz Atom computer.
  • Autonomous Landing of a VTOL UAV on a Moving Platform Using Image-Based Visual Servoing Authors: Lee, Daewon; Ryan, Tyler; Kim, H. Jin
    In this paper we describe a vision-based algorithm to control a vertical-takeoff-and-landing unmanned aerial vehicle while tracking and landing on a moving platform. Specifically, we use image-based visual servoing (IBVS) to track the platform in two-dimensional image space and generate a velocity reference command used as the input to an adaptive sliding mode controller. Compared with other vision-based control algorithms that reconstruct a full three-dimensional representation of the target, which requires precise depth estimation, IBVS is computationally cheaper since it is less sensitive to errors in depth estimation, allowing a faster method to be used to obtain this estimate. To enhance velocity tracking of the sliding mode controller, an adaptive rule is described to account for the ground effect experienced during the maneuver. Finally, the IBVS algorithm integrated with the adaptive sliding mode controller for tracking and landing is validated in an experimental setup using a quadrotor. (The textbook IBVS control law is sketched after this session list.)
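
Two sketches for this session. First, a toy 2-D version of the two ideas in the Han et al. abstract: ICP seeded with the transform propagated from the previous frame, and a hard per-frame time budget after which the current estimate is returned. The brute-force nearest-neighbour search and the 30 ms deadline are illustrative assumptions; a real system would use a k-d tree and the interframe-homography factoring described in the paper.

    import time

    import numpy as np

    def icp_2d(src, dst, T_init, deadline_s=0.03):
        """Point-to-point 2-D ICP with propagated initialization and a
        time budget. src, dst: (N, 2) and (M, 2) point arrays; T_init:
        3x3 homogeneous transform carried over from the previous frame."""
        T = T_init.copy()
        start = time.monotonic()
        while time.monotonic() - start < deadline_s:
            P = (T[:2, :2] @ src.T).T + T[:2, 2]
            # nearest neighbour in dst for every transformed source point
            nn = dst[np.argmin(((P[:, None] - dst[None]) ** 2).sum(-1), axis=1)]
            # closed-form rigid alignment via SVD (Umeyama / Arun)
            mp, mn = P.mean(0), nn.mean(0)
            U, _, Vt = np.linalg.svd((P - mp).T @ (nn - mn))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:  # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            dT = np.eye(3)
            dT[:2, :2], dT[:2, 2] = R, mn - R @ mp
            T = dT @ T
        return T  # best estimate available when the frame budget expires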
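
Second, the velocity reference that the Lee et al. controller tracks comes from image-based visual servoing. The snippet below is the textbook IBVS law for point features (the Chaumette-Hutchinson interaction matrix), v = -lambda L^+ (s - s*), given for context; the paper's feature set, gains, and adaptive sliding-mode outer loop are not reproduced.

    import numpy as np

    def ibvs_velocity(s, s_star, Z, lam=0.5):
        """Camera twist [vx vy vz wx wy wz] driving features s toward
        s_star. s, s_star: normalized image coordinates (x, y) per
        point; Z: rough depth of each point (IBVS tolerates coarse
        depth values, which is the computational advantage the
        abstract mentions)."""
        s, s_star = np.asarray(s, float), np.asarray(s_star, float)
        L = []
        for (x, y), z in zip(s.reshape(-1, 2), Z):
            # interaction matrix rows for one normalized image point
            L.append([-1 / z, 0, x / z, x * y, -(1 + x * x), y])
            L.append([0, -1 / z, y / z, 1 + y * y, -x * y, -x])
        e = (s - s_star).ravel()
        return -lam * np.linalg.pinv(np.array(L)) @ e

    # one feature at (0.1, -0.05), desired at the image centre, depth ~2 m
    print(ibvs_velocity([0.1, -0.05], [0.0, 0.0], Z=[2.0]))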

RGB-D Localization and Mapping

  • Efficient Scene Simulation for Robust Monte Carlo Localization Using an RGB-D Camera Authors: Fallon, Maurice; Johannsson, Hordur; Leonard, John
    This paper presents Kinect Monte Carlo Localization (KMCL), a new method for localization in three-dimensional indoor environments using RGB-D cameras, such as the Microsoft Kinect. The approach makes use of a low-fidelity a priori 3-D model of the area of operation composed of large planar segments, such as walls and ceilings, which are assumed to remain static. Using this map as input, the KMCL algorithm employs feature-based visual odometry as the particle propagation mechanism and utilizes the 3-D map and the underlying sensor image formation model to efficiently simulate RGB-D camera views at the particle poses, using a graphics processing unit (GPU). The generated 3-D views of the scene are then used to evaluate the likelihood of the particle poses. This GPU implementation provides a factor-of-ten speedup over a pure distance-based method while achieving comparable accuracy. Experimental results are presented for five different configurations, including: (1) a robotic wheelchair, (2) a sensor mounted on a person, (3) an Ascending Technologies quadrotor, (4) a Willow Garage PR2, and (5) an RWI B21 wheeled mobile robot platform. The results demonstrate that the system can perform robust localization with 3-D information for motions as fast as 1.5 meters per second. The approach is designed to be applicable not just to robotics but to other applications such as wearable computing. (A minimal stand-in for the per-particle view-comparison weighting appears after this session list.)
  • Robust Egomotion Estimation Using ICP in Inverse Depth Coordinates Authors: Lui, Wen Lik Dennis; Tang, Titus Jia Jie; Drummond, Tom; Li, Wai Ho
    This paper presents a 6 degrees of freedom egomotion estimation method using Iterative Closest Point (ICP) for low-cost and low-accuracy range cameras such as the Microsoft Kinect. Instead of Euclidean coordinates, the method uses inverse depth coordinates, which better conform to the error characteristics of raw sensor data. Novel inverse depth formulations of point-to-point and point-to-plane error metrics are derived as part of our implementation. The implemented system runs in real time at an average of 28 frames per second (fps) on a standard computer. Extensive experiments were performed to evaluate different combinations of error metrics and parameters. Results show that our system is accurate and robust across a variety of motion trajectories. The point-to-plane error metric was found to be the best at coping with large inter-frame motion while remaining accurate and maintaining real-time performance. (A simplified inverse-depth weighting sketch appears after this session list.)
  • Online Egomotion Estimation of RGB-D Sensors Using Spherical Harmonics Authors: Osteen, Philip; Owens, Jason; Kessens, Chad C.
    We present a technique to estimate the egomotion of an RGB-D sensor based on rotations of functions defined on the unit sphere. In contrast to traditional approaches, our technique is not based on image features and does not require correspondences to be generated between frames of data. Instead, consecutive functions are correlated using spherical harmonic analysis. An Extended Gaussian Image (EGI), created from the local normal estimates of a point cloud, defines each function. Correlations are efficiently computed using Fourier transformations, resulting in a 3 Degree of Freedom (3-DoF) rotation estimate. An Iterative Closest Point (ICP) process then refines the initial rotation estimate and adds a translational component, yielding a full 6-DoF egomotion estimate. The focus of this work is to investigate the merits of using spherical harmonic analysis for egomotion estimation by comparison with alternative 6-DoF methods. We compare the performance of the proposed technique with that of stand-alone ICP and image feature based methods. As with other egomotion techniques, estimation errors accumulate and degrade results, necessitating correction mechanisms for robust localization. For this report, however, we use the raw estimates; no filtering or smoothing processes are applied. In-house and external benchmark data sets are analyzed for both runtime and accuracy. Results show that the algorithm is competitive in terms of both accuracy and runtime. (A minimal sketch of building the Extended Gaussian Image appears after this session list.)
  • Incremental Registration of RGB-D Images Authors: Dryanovski, Ivan; Jaramillo, Carlos; Xiao, Jizhong
    An RGB-D camera is a sensor which outputs range and color information about objects. Recent technological advances in this area have introduced affordable RGB-D devices in the robotics community. In this paper, we present a real-time technique for 6-DoF camera pose estimation through the incremental registration of RGB-D images. First, a set of edge features is computed from the depth and color images. An initial motion estimation is calculated through aligning the features. This initial guess is refined by applying the Iterative Closest Point algorithm on the dense point cloud data. A rigorous error analysis assesses several sets of RGB-D ground truth data via an error accumulation metric. We show that the proposed two-stage approach significantly reduces error in the pose estimation, compared to a state-of-the-art ICP registration technique.
  • An Evaluation of the RGB-D SLAM System Authors: Endres, Felix; Hess, Juergen Michael; Engelhard, Nikolas; Sturm, Jürgen; Cremers, Daniel; Burgard, Wolfram
    We present an approach to simultaneous localization and mapping (SLAM) for RGB-D cameras like the Microsoft Kinect. Our system concurrently estimates the trajectory of a hand-held Kinect and generates a dense 3D model of the environment. We present the key features of our approach and evaluate its performance thoroughly on a recently published dataset, including a large set of sequences of different scenes with varying camera speeds and illumination conditions. In particular, we evaluate the accuracy, robustness, and processing time for three different feature descriptors (SIFT, SURF, and ORB). The experiments demonstrate that our system can robustly deal with difficult data in common indoor scenarios while being fast enough for online operation. Our system is fully available as open-source.
  • Depth Camera Based Indoor Mobile Robot Localization and Navigation Authors: Biswas, Joydeep; Veloso, Manuela
    The sheer volume of data generated by depth cameras is challenging to process in real time, in particular when used for indoor mobile robot localization and navigation. We introduce the Fast Sampling Plane Filtering (FSPF) algorithm to reduce the volume of the 3D point cloud by sampling points from the depth image and classifying local grouped sets of points as belonging to planes in 3D (the "plane filtered" points) or as not corresponding to planes within a specified error margin (the "outlier" points). We then introduce a localization algorithm based on an observation model that down-projects the plane filtered points onto 2D and assigns correspondences for each point to lines in the 2D map. The full sampled point cloud (consisting of both plane filtered and outlier points) is processed for obstacle avoidance for autonomous navigation. All our algorithms process only the depth information and do not require additional RGB data. The FSPF, localization, and obstacle avoidance algorithms run in real time at full camera frame rates (30 Hz) with low CPU requirements (16%). We provide experimental results demonstrating the effectiveness of our approach for indoor mobile robot localization and navigation. We further compare the accuracy and robustness of localization using depth cameras with FSPF against alternative approaches that simulate laser rangefinder scans from the 3D data. (A compressed sketch of the plane-sampling step appears after this session list.)
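
Sketches for the examples promised in this session's abstracts follow. The KMCL paper weights each particle by comparing the measured view against a view simulated at the particle's pose. The function below is a minimal stand-in for that weighting under a truncated independent-Gaussian per-pixel depth model; the noise and truncation values are assumptions, and the paper's GPU-rendered views and sensor image formation model are far more elaborate.

    import numpy as np

    def depth_likelihood(measured, simulated, sigma=0.1, ll_max=10.0):
        """Particle weight from a measured depth image and the depth
        image rendered at the particle's pose (both HxW arrays, NaN
        where invalid). Truncating the per-pixel log-likelihood limits
        the influence of outliers and unmodelled objects."""
        d = measured - simulated
        valid = np.isfinite(d)
        ll = np.minimum(0.5 * (d[valid] / sigma) ** 2, ll_max)
        return np.exp(-ll.mean())  # resample particles proportionally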
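
The Lui et al. inverse-depth idea can be approximated, at its crudest, by whitening an ordinary point-to-plane residual with a depth-dependent noise model: Kinect-class cameras have roughly constant noise in inverse depth q = 1/z, so sigma_z is approximately z^2 * sigma_q. The paper derives the point-to-point and point-to-plane metrics directly in inverse-depth coordinates; the weighting below is a simplified stand-in for that, and sigma_q is an assumed value.

    import numpy as np

    def weighted_point_to_plane(src, dst, normals, sigma_q=0.002):
        """Whitened point-to-plane residuals n . (p - q) for matched
        (N, 3) point arrays, down-weighting far (noisy) points whose
        depth variance grows quadratically with range."""
        r = ((src - dst) * normals).sum(axis=1)
        sigma_z = sigma_q * dst[:, 2] ** 2  # per-point depth std dev
        return r / sigma_z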
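
For the Osteen et al. approach, the function correlated across frames is the Extended Gaussian Image: a histogram of surface-normal directions over the unit sphere. Building one is simple, as below; the band-limited spherical-harmonic expansion and the Fourier-domain rotation correlation that follow it in the paper are not reproduced, and the uniform angular binning here is a simplification.

    import numpy as np

    def build_egi(normals, n_theta=16, n_phi=32):
        """Normalized histogram of unit normals (N, 3) over spherical
        angles: the Extended Gaussian Image of a point cloud."""
        theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))    # polar
        phi = np.arctan2(normals[:, 1], normals[:, 0]) + np.pi  # azimuth
        hist, _, _ = np.histogram2d(theta, phi, bins=(n_theta, n_phi),
                                    range=((0, np.pi), (0, 2 * np.pi)))
        return hist / max(len(normals), 1)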
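
Finally, a compressed take on Fast Sampling Plane Filtering from the Biswas and Veloso paper: sample three nearby pixels from the ordered point-cloud image, fit the plane through them, test further samples from the same window, and keep windows that fit as plane-filtered points while the rest become outliers. All thresholds and counts below are invented; consult the paper for the actual parameters and the full algorithm.

    import numpy as np

    def fspf_sample(points_img, trials=200, window=20, inlier_tol=0.02,
                    min_inlier_frac=0.8, tests_per_window=40):
        """points_img: (H, W, 3) array of 3-D points indexed by pixel.
        Returns (plane_filtered, outlier) point arrays."""
        h, w, _ = points_img.shape
        plane_pts, outlier_pts = [], []
        rng = np.random.default_rng(0)
        for _ in range(trials):
            r = rng.integers(window, h - window)
            c = rng.integers(window, w - window)
            # three nearby points define a candidate local plane
            offs = rng.integers(-window, window, size=(3, 2))
            p0, p1, p2 = (points_img[r + dr, c + dc] for dr, dc in offs)
            n = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(n) < 1e-9:
                continue  # degenerate sample
            n = n / np.linalg.norm(n)
            # test additional samples drawn from the same window
            offs = rng.integers(-window, window, size=(tests_per_window, 2))
            pts = points_img[r + offs[:, 0], c + offs[:, 1]]
            inliers = np.abs((pts - p0) @ n) < inlier_tol
            if inliers.mean() >= min_inlier_frac:
                plane_pts.extend(pts[inliers])
            else:
                outlier_pts.extend(pts)
        return np.array(plane_pts), np.array(outlier_pts)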