TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos is available via this link: PaperPlaza. A step-by-step guide to accessing the videos is here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv

Marine Robotics I

  • Towards Improving Mission Execution for Autonomous Gliders with an Ocean Model and Kalman Filter Authors: Smith, Ryan N.; Kelly, Jonathan; Sukhatme, Gaurav
    Effective execution of a planned path by an underwater vehicle is important for proper analysis of the gathered science data, as well as to ensure the safety of the vehicle during the mission. Here, we propose the use of an unscented Kalman filter to aid in determining how the planned mission is executed. Given a set of waypoints that define a planned path and a discretization of the ocean currents from a regional ocean model, we present an approach to determine the time interval at which the glider should surface to maintain a prescribed tracking error, while also limiting its time on the ocean surface. We assume practical mission parameters provided from previous field trials for the problem setup, and provide the simulated results of the Kalman filter mission planning approach. The results are initially compared to data from prior field experiments in which an autonomous glider executed the same path without pre-planning. Then, the results are validated through field trials with multiple autonomous gliders implementing different surfacing intervals simultaneously while following the same path.
  • Position and Velocity Filters for Intervention AUVs Based on Single Range and Depth Measurements Authors: Viegas, Daniel; Batista, Pedro; Oliveira, Paulo; Silvestre, Carlos
    This paper proposes novel cooperative navigation solutions for an Intervention Autonomous Underwater Vehicle (I-AUV) working in tandem with an Autonomous Surface Craft (ASC). The I-AUV is assumed to be moving in the presence of constant unknown ocean currents, and aims to estimate its position relying on measurements of its range to the ASC and of its depth relative to the sea level. Two different scenarios are considered: in one, the ASC transmits its position and velocity to the I-AUV, while in the other the ASC transmits only its position, and the I-AUV has access to measurements of its velocity relative to the ASC. A sufficient condition for observability and a method for designing state observers with Globally Asymptotically Stable (GAS) error dynamics are presented for both problems. Finally, simulation results are included and discussed to assess the performance of the proposed solutions in the presence of measurement noise.
  • Uncertainty-Driven View Planning for Underwater Inspection Authors: Hollinger, Geoffrey; Englot, Brendan; Hover, Franz; Mitra, Urbashi; Sukhatme, Gaurav
    We discuss the problem of inspecting an underwater structure, such as a submerged ship hull, with an autonomous underwater vehicle (AUV). In such scenarios, the goal is to construct an accurate 3D model of the structure and to detect any anomalies (e.g., foreign objects or deformations). We propose a method for constructing 3D meshes from sonar-derived point clouds that provides watertight surfaces, and we introduce uncertainty modeling through non-parametric Bayesian regression. Uncertainty modeling provides novel cost functions for planning the path of the AUV to minimize a metric of inspection performance. We draw connections between the resulting cost functions and submodular optimization, which provides insight into the formal properties of active perception problems. In addition, we present experimental trials that utilize profiling sonar data from ship hull inspection.
  • Formation Control of Underactuated Autonomous Surface Vessels Using Redundant Manipulator Analogs Authors: Bishop, Bradley
    In this paper, we present a method utilizing redundant manipulator analogs for formation control of underactuated autonomous surface vessels (ASVs) with realistic turning constraints and dynamics. The method used relies on casting the swarm as a single entity and utilizing redundant manipulator techniques to guarantee task-level formation control as well as obstacle avoidance and secondary tasks such as mean position control. The method presented differs from other approaches in that the units herein represent a larger class of ASVs with realistic limitations on vessel motions and that the exact position of each of the units on the formation profile is not specified.
  • Delayed State Information Filter for USBL-Aided AUV Navigation Authors: Ribas, David; Ridao, Pere; Mallios, Angelos; Palomeras, Narcis
    This paper presents a navigation system for an Autonomous Underwater Vehicle (AUV) which merges standard dead reckoning navigation data with absolute position fixes from an Ultra-Short Base Line (USBL) system. Traditionally, the USBL transceiver is located on the surface, which makes it necessary to feed the position fixes back to the AUV by means of an acoustic modem. An Information filter, which maintains a bounded circular buffer of past vehicle poses, is in charge of the sensor data fusion while dealing with the delays induced by the acoustic communication. The method is validated using a data set gathered for a dam inspection task.
  • Miniature Underwater Glider: Design, Modeling, and Experimental Results Authors: Zhang, Feitian; Thon, John; Thon, Cody; Tan, Xiaobo
    The concept of gliding robotic fish combines gliding and fin-actuation mechanisms to realize energy-efficient locomotion and high maneuverability, and holds strong promise for mobile sensing in versatile aquatic environments. In this paper we present the modeling and design of a miniature fish-like glider, a key enabling component for gliding robotic fish. The full dynamics of the glider are first derived and then reduced to the sagittal plane, where the lift, drag, and pitch moment coefficients are obtained as linear or quadratic functions of the attack angle based on computational fluid dynamics (CFD) analysis. The model is used to design the glider by accommodating stringent constraints on dimensions yet meeting the desired specification on speed. A fully untethered prototype of the underwater glider is developed, with a weight of 4 kg and length of 40 cm. With a net buoyancy of 20 g, it realizes a steady gliding speed of 20 cm/s. The volume and net buoyancy of this glider are less than 10% and 5%, respectively, of those of reported gliders in the literature, and its speed per unit net buoyancy is over 9 times that of those vehicles. Experimental results have shown that the model is able to capture well both the steady glide behavior under different control inputs, and the dynamics during transients.
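Several of the talks above fuse dead-reckoned underwater motion with sparse absolute fixes using Kalman-type filters. As a purely illustrative sketch (not any paper's actual implementation), the following shows a minimal linear Kalman filter for a glider state of position and current-induced drift rate, with the covariance growing between surfacings and collapsing at a surface GPS fix; all numbers are hypothetical:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update step (illustrative 1-D example)."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # corrected state
    P = (np.eye(len(x)) - K @ H) @ P       # corrected covariance
    return x, P

# State: [along-track position (m), current-induced drift rate (m/s)].
dt = 60.0                                  # seconds between prediction steps
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-drift motion model
Q = np.diag([0.5, 1e-4])                   # process noise
H = np.array([[1.0, 0.0]])                 # a surface GPS fix observes position only
R = np.array([[4.0]])                      # fix noise (m^2)

x = np.array([0.0, 0.05])                  # initial position and drift estimate
P = np.diag([1.0, 0.01])

for k in range(10):                        # predict while submerged
    x, P = F @ x, F @ P @ F.T + Q
x, P = kalman_update(x, P, np.array([32.0]), H, R)  # one surface fix
```

Between surfacings the position variance `P[0, 0]` grows; each fix shrinks it below the fix noise. Choosing the surfacing interval so that this predicted uncertainty stays below a prescribed tracking-error bound is the essence of the approach described in the first talk of this session.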

Autonomy and Vision for UAVs

  • Cooperative Vision-Aided Inertial Navigation Using Overlapping Views Authors: Melnyk, Igor; Hesch, Joel; Roumeliotis, Stergios
    In this paper, we study the problem of Cooperative Localization (CL) for two robots, each equipped with an Inertial Measurement Unit (IMU) and a camera. We present an algorithm that enables the robots to exploit common features, observed over a sliding-window time horizon, in order to improve the localization accuracy of both robots. In contrast to existing CL methods, which require distance and/or bearing robot-to-robot observations, our algorithm infers the relative position and orientation (pose) of the robots using only the visual observations of common features in the scene. Moreover, we analyze the system observability properties to determine how many degrees of freedom (d.o.f.) of the relative transformation can be computed under different measurement scenarios. Lastly, we present simulation results to evaluate the performance of the proposed method.
  • UAV Vision: Feature Based Accurate Ground Target Localization through Propagated Initializations and Interframe Homographies Authors: Han, Kyuseo; Aeschliman, Chad; Park, Johnny; Kak, Avinash; Kwon, Hyukseong; Pack, Daniel
    Our work presents solutions to two related vexing problems in feature-based localization of ground targets in Unmanned Aerial Vehicle (UAV) images: (i) A good initial guess at the pose estimate that would speed up the convergence to the final pose estimate for each image frame in a video sequence; and (ii) Time-bounded estimation of the position of the ground target. We address both these problems within the framework of the ICP (Iterative Closest Point) algorithm that now has a rich tradition of usage in computer vision and robotics applications. We solve the first of the two problems by frame-to-frame propagation of the computed pose estimates for the purpose of the initializations needed by ICP. The second problem is solved by terminating the iterative estimation process at the expiration of the available time for each image frame. We show that when frame-to-frame homography is factored into the iterative calculations, the accuracy of the position calculated at the time of bailing out of the iterations is nearly always sufficient for the goals of UAV vision.
  • First Results in Autonomous Landing and Obstacle Avoidance by a Full-Scale Helicopter Authors: Scherer, Sebastian; Chamberlain, Lyle; Singh, Sanjiv
    Currently deployed unmanned rotorcraft rely on carefully preplanned missions and operate from prepared sites and thus avoid the need to perceive and react to the environment. Here we consider the problems of finding suitable but previously unmapped landing sites given general coordinates of the goal and planning collision-free trajectories in real time to land at the “optimal” site. This requires accurate mapping, fast landing zone evaluation algorithms, and motion planning. We report here on the sensing, perception and motion planning integrated onto a full-scale helicopter that flies completely autonomously. We show results from 8 landing-site-selection experiments and 5 obstacle-avoidance runs. These experiments have demonstrated the first autonomous full-scale helicopter that successfully selects its own landing sites and avoids obstacles.
  • Real-Time Onboard Visual-Inertial State Estimation and Self-Calibration of MAVs in Unknown Environments Authors: Weiss, Stephan; Achtelik, Markus W.; Lynen, Simon; Chli, Margarita; Siegwart, Roland
    The combination of visual and inertial sensors has proved to be very popular in MAV navigation due to the flexibility it offers in weight, power consumption and cost. At the same time, coping with the large latency between inertial and visual measurements and processing images in real time pose great research challenges. Most modern MAV navigation systems avoid tackling this explicitly by employing a ground station for off-board processing. We propose a navigation algorithm for MAVs equipped with a single camera and an IMU which is able to run onboard and in real time. The main focus is on the proposed speed-estimation module which converts the camera into a metric body-speed sensor using IMU data within an EKF framework. We show how this module can be used for full self-calibration of the sensor suite in real time. The module is then used both during initialization and as a fall-back solution during tracking failures of a keyframe-based VSLAM module. The latter is based on an existing high-performance algorithm, extended such that it achieves scalable 6DoF pose estimation at constant complexity. Fast onboard speed control is ensured by sole reliance on the optical flow of at least two features in two consecutive camera frames and the corresponding IMU readings. Our nonlinear observability analysis and our real experiments demonstrate that this approach can be used to control a MAV in speed, while we also show results of operation at 40 Hz on an onboard 1.6 GHz Atom computer.
  • Autonomous Landing of a VTOL UAV on a Moving Platform Using Image-Based Visual Servoing Authors: Lee, Daewon; Ryan, Tyler; Kim, H. Jin
    In this paper we describe a vision-based algorithm to control a vertical-takeoff-and-landing unmanned aerial vehicle while tracking and landing on a moving platform. Specifically, we use image-based visual servoing (IBVS) to track the platform in two-dimensional image space and generate a velocity reference command used as the input to an adaptive sliding mode controller. Compared with other vision-based control algorithms that reconstruct a full three-dimensional representation of the target, which requires precise depth estimation, IBVS is computationally cheaper: it is less sensitive to errors in the depth estimate, which allows a faster method to obtain it. To enhance velocity tracking of the sliding mode controller, an adaptive rule is described to account for the ground effect experienced during the maneuver. Finally, the IBVS algorithm integrated with the adaptive sliding mode controller for tracking and landing is validated in an experimental setup using a quadrotor.
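The last talk's core control law is classical IBVS: the camera velocity is computed from the image-space feature error through the pseudoinverse of an interaction matrix, needing only a rough depth estimate. A minimal sketch of that law (standard point-feature form, not the authors' exact controller; the target geometry and depth below are made-up values):

```python
import numpy as np

def ibvs_velocity(features, targets, depth, gain=0.5):
    """Classic IBVS law v = -gain * pinv(L) @ e for point features.

    features, targets: (N, 2) arrays of normalized image coordinates (x, y).
    depth: rough depth Z of the points; IBVS tolerates error here.
    Returns a 6-DoF camera velocity command [vx, vy, vz, wx, wy, wz].
    """
    e = (features - targets).reshape(-1)          # stacked image-space error
    rows = []
    for (x, y) in features:
        Z = depth
        # Interaction matrix of a normalized point feature (standard form).
        rows.append([-1 / Z, 0, x / Z, x * y, -(1 + x**2), y])
        rows.append([0, -1 / Z, y / Z, 1 + y**2, -x * y, -x])
    L = np.array(rows)
    return -gain * np.linalg.pinv(L) @ e

# Hypothetical example: four fiducial corners on a landing platform,
# currently offset in the image because the platform has moved.
targets = np.array([[-0.1, -0.1], [0.1, -0.1], [0.1, 0.1], [-0.1, 0.1]])
features = targets + 0.05                          # platform appears shifted
v = ibvs_velocity(features, targets, depth=2.0)
```

In the paper's architecture such a velocity command would serve as the reference input to the adaptive sliding-mode controller rather than being applied directly.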

Sensing for manipulation

  • Using Depth and Appearance Features for Informed Robot Grasping of Highly Wrinkled Clothes Authors: Ramisa, Arnau; Alenyà, Guillem; Moreno-Noguer, Francesc; Torras, Carme
    Detecting grasping points is a key problem in cloth manipulation. Most current approaches follow a multiple re-grasp strategy for this purpose, in which clothes are sequentially grasped from different points until one of them yields a desired configuration. In this paper, by contrast, we circumvent the need for multiple re-graspings by building a robust detector that identifies the grasping points, generally in one single step, even when clothes are highly wrinkled. In order to handle the large variability a deformed cloth may have, we build a Bag of Features based detector that combines appearance and 3D geometry features. An image is scanned using a sliding window with a linear classifier, and the candidate windows are refined using a non-linear SVM and a "grasp goodness" criterion to select the best grasping point. We demonstrate our approach detecting collars in deformed polo shirts, using a Kinect camera. Experimental results show good performance of the proposed method not only in identifying the same trained textile object part under severe deformations and occlusions, but also in identifying the corresponding part in other clothes, exhibiting a degree of generalization.
  • Integrating surface-based hypotheses and manipulation for autonomous segmentation and learning of object representations Authors: Ude, Ales; Schiebener, David; Morimoto, Jun
    Learning about new objects that a robot sees for the first time is a difficult problem because it is not clear how to define the concept of object in general terms. In this paper we consider as objects those physical entities that are comprised of features which move consistently when the robot acts upon them. Among the possible actions that a robot could apply to a hypothetical object, pushing seems to be the most suitable one due to its relative simplicity and general applicability. We propose a methodology to generate and apply pushing actions to hypothetical objects. A probing push causes visual features to move, which enables the robot to either confirm or reject the initial hypothesis about the existence of the object. Furthermore, the robot can discriminate the object from the background and accumulate visual features that are useful for training of state-of-the-art statistical classifiers such as bag of features.
  • From Object Categories to Grasp Transfer Using Probabilistic Reasoning Authors: Madry, Marianna; Song, Dan; Kragic, Danica
    In this paper we address the problem of grasp generation and grasp transfer between objects using categorical knowledge. The system is built upon i) an active scene segmentation module, capable of generating object hypotheses and segmenting them from the background in real time, ii) an object categorization system using integration of 2D and 3D cues, and iii) a probabilistic grasp reasoning system. Individual object hypotheses are first generated, categorized and then used as the input to a grasp generation and transfer system that encodes task, object and action properties. The experimental evaluation compares individual 2D and 3D categorization approaches with the integrated system, and it demonstrates the usefulness of the categorization in task-based grasping and grasp transfer.
  • Voting-Based Pose Estimation for Robotic Assembly Using a 3D Sensor Authors: Choi, Changhyun; Taguchi, Yuichi; Tuzel, Oncel; Liu, Ming-Yu; Ramalingam, Srikumar
    We propose a voting-based pose estimation algorithm applicable to 3D sensors, which are fast replacing their 2D counterparts in many robotics, computer vision, and gaming applications. It was recently shown that a pair of oriented 3D points, which are points on the object surface with normals, in a voting framework enables fast and robust pose estimation. Although oriented surface points are discriminative for objects with sufficient curvature changes, they are not compact and discriminative enough for many industrial and real-world objects that are mostly planar. As edges play the key role in 2D registration, depth discontinuities are crucial in 3D. In this paper, we investigate and develop a family of pose estimation algorithms that better exploit this boundary information. In addition to oriented surface points, we use two other primitives: boundary points with directions and boundary line segments. Our experiments show that these carefully chosen primitives encode more information compactly and thereby provide higher accuracy for a wide class of industrial parts and enable faster computation. We demonstrate a practical robotic bin-picking system using the proposed algorithm and a 3D sensor.
  • Supervised Learning of Hidden and Non-Hidden 0-Order Affordances and Detection in Real Scenes Authors: Aldoma, Aitor; Tombari, Federico; Vincze, Markus
    The ability to perceive possible interactions with the environment is a key capability of task-guided robotic agents. An important subset of possible interactions depends solely on the objects of interest and their position and orientation in the scene. We call these object-based interactions 0-order affordances and divide them into non-hidden and hidden, depending on whether the current configuration of an object in the scene renders its affordance directly usable or not. Conversely to other works, we propose that detecting affordances that are not directly perceivable increases the usefulness of robotic agents with manipulation capabilities, so that by appropriate manipulation they can modify the object configuration until the sought affordance becomes available. In this paper we show how 0-order affordances depending on the geometry of the objects and their pose can be learned using a supervised learning strategy on 3D mesh representations of the objects, allowing the use of the whole object geometry. Moreover, we show how the learned affordances can be detected in real scenes obtained with a low-cost depth sensor like the Microsoft Kinect through object recognition and 6DoF pose estimation, and present results for both learning on meshes and detection on real scenes to demonstrate the practical application of the presented approach.
  • Estimating Object Grasp Sliding Via Pressure Array Sensing Authors: Alcazar, Javier Adolfo; Barajas, Leandro
    Advances in design and fabrication technologies are enabling the production and commercialization of sensor-rich robotic hands with skin-like sensor arrays. Robotic skin is poised to become a crucial interface between the robot's embodied intelligence and the external world. The need to fuse and make sense out of data extracted from skin-like sensors is readily apparent. This paper presents a real-time sensor fusion algorithm that can be used to accurately estimate object position, translation and rotation during grasping. When an object being grasped moves across the sensor array, it creates a sliding sensation; the spatial-temporal sensations are estimated by computing localized slid vectors using an optical flow approach. These results were benchmarked against an L-inf Norm approach using a nominal known object trajectory generated by sliding and rotating an object over the sensor array using a second, high-accuracy, industrial robot. Rotation and slid estimation can later be used to improve grasping quality and dexterity.
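The last talk treats sliding contact on a pressure array as an optical-flow problem: two successive pressure frames play the role of two images, and the displacement that best explains the temporal change is the slip. A minimal gradient-based (Lucas-Kanade style) sketch of that idea, assuming a single global slip vector over a synthetic 16x16 taxel array; this is an illustration of the technique, not the paper's algorithm:

```python
import numpy as np

def slip_vector(frame0, frame1):
    """Estimate one translational slip vector (u, v) between two pressure
    frames by solving the brightness-constancy equations in least squares."""
    # Spatial pressure gradients of the first frame, and the temporal change.
    gy, gx = np.gradient(frame0.astype(float))     # axis 0 = y, axis 1 = x
    gt = frame1.astype(float) - frame0.astype(float)
    # Stack gx*u + gy*v = -gt over all taxels and solve for (u, v).
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = -gt.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([u, v])

# Synthetic example: a Gaussian pressure blob sliding one taxel to the right.
grid = np.arange(16)
xx, yy = np.meshgrid(grid, grid)
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 8.0)
flow = slip_vector(blob(7, 8), blob(8, 8))         # ~[1, 0] expected
```

Computing such vectors over local windows rather than the whole array would give the localized slid vectors the abstract describes, which can then be combined to separate translation from rotation of the grasped object.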