TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos can be obtained by visiting this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided your consent form for your video to be published and it is still missing, please contact support@techtalks.tv

Grasping: Learning and Estimation

  • End-To-End Dexterous Manipulation with Deliberate Interactive Estimation Authors: Hudson, Nicolas; Howard, Tom; Ma, Jeremy; Jain, Abhinandan; Bajracharya, Max; Myint, Steven; Matthies, Larry; Backes, Paul; Hebert, Paul; Fuchs, Thomas; Burdick, Joel
    This paper presents a model-based approach to autonomous dexterous manipulation, developed as part of the DARPA Autonomous Robotic Manipulation (ARM) program. The developed autonomy system uses robot, object, and environment models to identify and localize objects, as well as to plan and execute the required manipulation tasks. Deliberate interaction with objects and the environment increases system knowledge about the combined robot and environmental state, enabling high-precision tasks such as key insertion to be performed in a consistent framework. This approach has been demonstrated across a wide range of manipulation tasks, and is the leading manipulation approach in independent DARPA testing.
  • Template-Based Learning of Grasp Selection Authors: Herzog, Alexander; Pastor, Peter; Kalakrishnan, Mrinal; Righetti, Ludovic; Asfour, Tamim; Schaal, Stefan
    The ability to grasp unknown objects is an important skill for personal robots, which has been addressed by many present and past research projects. A crucial aspect of grasping is choosing an appropriate grasp configuration, i.e. the 6D pose of the hand relative to the object and its finger configuration. Finding feasible grasp configurations for novel objects, however, is challenging because of the huge variety in shape and size of these objects and the specific kinematics of the robotic hand in use. In this paper, we introduce a new grasp selection algorithm able to find object grasp poses based on previously demonstrated grasps. Assuming that objects with similar shapes can be grasped in a similar way, we associate with each demonstrated grasp a grasp template. The template is a local shape descriptor for a possible grasp pose and is constructed using 3D information from depth sensors. For each new object to grasp, the algorithm then finds the best grasp candidate in the library of templates. The grasp selection is also able to improve over time, using the information from previous grasp attempts to adapt the ranking of the templates. We tested the algorithm on two different platforms, the Willow Garage PR2 and the Barrett WAM arm, which have very different hands. Our results show that the algorithm is able to find good grasp configurations for a large set of objects from a relatively small set of demonstrations, and does indeed improve its performance over time. (A code sketch of this template-matching idea follows this session's list.)
  • Learning Hardware Agnostic Grasps for a Universal Jamming Gripper Authors: Jiang, Yun; Amend, John; Lipson, Hod; Saxena, Ashutosh
    Grasping has been studied from various perspectives including planning, control, and learning. In this paper, we take a learning approach to predict successful grasps for a universal jamming gripper. A jamming gripper is comprised of a flexible membrane filled with granular material, and it can quickly harden or soften to grip objects of varying shape by modulating the air pressure within the membrane. Although this gripper is easy to control, it is difficult to develop a physical model of its gripping mechanism because it undergoes significant deformation during use. Thus, many grasping approaches based on physical models (such as those based on form- and force-closure) would be challenging to apply to a jamming gripper. Here we instead use a supervised learning algorithm and design both visual and shape features for capturing the properties of good grasps. We show that given an RGB image and a point cloud of the target object, our algorithm can predict successful grasps for the jamming gripper without requiring a physical model. It can therefore be applied to both a parallel plate gripper and a jamming gripper without modification. We demonstrate that our learning algorithm enables both grippers to pick up a wide variety of objects, and through robotic experiments we are able to define the type of objects each gripper is best suited for handling. (A code sketch of such a grasp-success classifier follows this session's list.)
  • Learning Grasp Stability Authors: Dang, Hao; Allen, Peter
    We deal with the problem of blind grasping, where we use tactile feedback to predict the stability of a robotic grasp given no visual or geometric information about the object being grasped. We first simulated tactile feedback using a soft finger contact model in GraspIt! and computed tactile contacts of thousands of grasps with a robotic hand using the Columbia Grasp Database. We used the K-means clustering method to learn a contact dictionary from the tactile contacts, which is a codebook that models the contact space. The feature vector for a grasp is a histogram computed from the distribution of its contacts over the contact space defined by the dictionary. An SVM is then trained to predict the stability of a robotic grasp given this feature vector. Experiments indicate that this model, which requires only a low-dimensional feature input, is useful in predicting the stability of a grasp. (A code sketch of this contact-dictionary pipeline follows this session's list.)
  • Learning to Slide a Magnetic Card through a Card Reader Authors: Sukhoy, Vladimir; Georgiev, Veselin; Wegter, Todd; Sweidan, Ramy; Stoytchev, Alexander
    This paper describes a set of experiments in which an upper-torso humanoid robot learned to slide a card through a card reader. The small size and the flexibility of the card presented a number of manipulation challenges for the robot. First, because most of the card is occluded by the card reader and the robot's hand during the sliding process, visual feedback is useless for this task. Second, because the card bends easily, it is difficult to distinguish between bending and hitting an obstacle in order to correct the sliding trajectory. To solve these manipulation challenges, this paper proposes a method for constraint detection that uses only proprioceptive data. The method uses dynamic joint torque thresholds that are calibrated using the robot's movements in free space. The experimental results show that, using this method, the robot can detect when the movement of the card is constrained and modify the sliding trajectory in real time, which makes solving this task possible. (A code sketch of threshold-based constraint detection follows this session's list.)
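
The following is a minimal sketch, not the authors' implementation, of the template-matching idea in "Template-Based Learning of Grasp Selection" above: each demonstrated grasp is stored with a shape descriptor (here a hypothetical flattened feature vector standing in for the paper's depth-based descriptor), a new object is matched against the library by descriptor distance, and a per-template weight adapted from grasp outcomes re-ranks the candidates over time.

```python
import numpy as np

class GraspTemplateLibrary:
    """Toy library of grasp templates. Descriptors stand in for the paper's
    local shape descriptors computed from depth data."""

    def __init__(self):
        self.descriptors = []   # one 1-D feature vector per demonstrated grasp
        self.grasp_poses = []   # the hand pose demonstrated for that template
        self.weights = []       # success weight adapted from grasp outcomes

    def add_demonstration(self, descriptor, grasp_pose):
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        self.grasp_poses.append(grasp_pose)
        self.weights.append(1.0)

    def select_grasp(self, descriptor):
        """Return (index, pose) of the best-ranked template for a new object."""
        d = np.asarray(descriptor, dtype=float)
        dists = np.array([np.linalg.norm(d - t) for t in self.descriptors])
        scores = np.array(self.weights) / (1.0 + dists)  # similar shape + good history
        best = int(np.argmax(scores))
        return best, self.grasp_poses[best]

    def report_outcome(self, index, success, lr=0.2):
        """Adapt the ranking using the outcome of an executed grasp."""
        self.weights[index] += lr * ((1.0 if success else 0.0) - self.weights[index])

# Hypothetical usage with random placeholder descriptors.
rng = np.random.default_rng(0)
lib = GraspTemplateLibrary()
for i in range(5):
    lib.add_demonstration(rng.normal(size=16), grasp_pose=f"demo_pose_{i}")
idx, pose = lib.select_grasp(rng.normal(size=16))
lib.report_outcome(idx, success=False)   # a failed attempt lowers this template's rank
```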
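
For "Learning Hardware Agnostic Grasps for a Universal Jamming Gripper", a short sketch under the assumption that visual features (from the RGB image) and shape features (from the point cloud) have already been extracted for each grasp candidate; an SVM stands in for the paper's supervised learner, and all data below are random placeholders.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: one row per grasp candidate. Columns would
# concatenate visual and shape features; here they are random placeholders.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 12))            # 12-D feature vectors (placeholder)
y_train = (X_train[:, 0] + X_train[:, 3] > 0)   # placeholder success labels

clf = SVC(kernel="rbf", probability=True)
clf.fit(X_train, y_train)

def rank_grasp_candidates(candidate_features):
    """Return candidate indices sorted by predicted success probability."""
    p_success = clf.predict_proba(candidate_features)[:, 1]
    return np.argsort(-p_success), p_success

# Example: score 5 hypothetical candidates for a new object.
order, probs = rank_grasp_candidates(rng.normal(size=(5, 12)))
print("best candidate:", order[0], "p(success) =", round(float(probs[order[0]]), 2))
```

Because the model sees only features, not gripper physics, the same code could be trained separately for a parallel plate gripper and a jamming gripper, which is the hardware-agnostic point the abstract makes.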
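
The pipeline in "Learning Grasp Stability" is concrete enough to sketch end to end: cluster tactile contacts with K-means to form a contact dictionary, describe each grasp by the histogram of its contacts over the dictionary, and train an SVM on those histograms. The contact features and labels below are random placeholders rather than GraspIt! output.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Placeholder tactile data: each grasp has a variable number of contacts,
# each contact described by a small feature vector (e.g. position + force).
train_grasps = [rng.normal(size=(rng.integers(4, 12), 6)) for _ in range(300)]
train_labels = rng.integers(0, 2, size=300)   # 1 = stable, 0 = unstable (placeholder)

# 1) Learn the contact dictionary (codebook) from all contacts.
all_contacts = np.vstack(train_grasps)
K = 32
dictionary = KMeans(n_clusters=K, n_init=10, random_state=0).fit(all_contacts)

# 2) Represent each grasp as a histogram over the dictionary entries.
def grasp_histogram(contacts):
    words = dictionary.predict(contacts)
    hist = np.bincount(words, minlength=K).astype(float)
    return hist / hist.sum()

X = np.array([grasp_histogram(g) for g in train_grasps])

# 3) Train an SVM to predict stability from the low-dimensional histogram.
clf = SVC(kernel="rbf").fit(X, train_labels)

new_grasp = rng.normal(size=(8, 6))
print("predicted stable?", bool(clf.predict([grasp_histogram(new_grasp)])[0]))
```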
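
For "Learning to Slide a Magnetic Card through a Card Reader", a minimal sketch of proprioceptive constraint detection: per-joint torque thresholds are calibrated from free-space motion (here mean magnitude plus a multiple of the standard deviation, an assumed rule since the abstract does not give the exact calibration), and any joint exceeding its threshold flags a constrained movement.

```python
import numpy as np

def calibrate_torque_thresholds(free_space_torques, k=3.0):
    """free_space_torques: (T, J) array of joint torques recorded while the
    arm moves in free space. Returns one threshold per joint, shape (J,)."""
    mu = free_space_torques.mean(axis=0)
    sigma = free_space_torques.std(axis=0)
    return np.abs(mu) + k * sigma

def movement_is_constrained(current_torques, thresholds):
    """True if any joint torque exceeds its calibrated free-space threshold."""
    return bool(np.any(np.abs(current_torques) > thresholds))

# Placeholder calibration data: a 7-joint arm, 500 free-space samples.
rng = np.random.default_rng(2)
free = rng.normal(0.0, 0.5, size=(500, 7))
thresholds = calibrate_torque_thresholds(free)

# During sliding, a bump against the card-reader slot shows up as a torque spike.
sample = np.array([0.1, 0.2, 3.5, 0.0, 0.1, -0.2, 0.1])
if movement_is_constrained(sample, thresholds):
    print("constraint detected: adjust the sliding trajectory")
```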

Non-Holonomic Motion Planning

  • Model Predictive Navigation for Position and Orientation Control of Nonholonomic Vehicles Authors: Karydis, Konstantinos; Valbuena, Luis; Tanner, Herbert G.
    In this paper we consider a nonholonomic system in the form of a unicycle and steer it to the origin so that both position and orientation converge to zero while avoiding obstacles. We introduce an artificial reference field, propose a discontinuous control policy consisting of a receding horizon strategy, and implement the resulting field-based controller in a way that theoretically guarantees collision avoidance; convergence of both position and orientation can also be established. The analysis integrates an invariance principle for differential inclusions with model predictive control. In this approach there is no need for the terminal cost in the receding horizon optimization to be a positive definite function. (A code sketch of a receding-horizon step for a unicycle follows this session's list.)
  • Regularity Properties and Deformation of Wheeled Robots Trajectories Authors: Pham, Quang-Cuong; Nakamura, Yoshihiko
    Our contribution in this article is twofold. First, we identify the regularity properties of the trajectories of planar wheeled mobile robots. By regularity properties of a trajectory we mean whether this trajectory, or a function computed from it, belongs to a certain class C^n (the class of functions that are differentiable n times with a continuous n-th derivative). We show that, under some generic assumptions about the rotation and steering velocities of the wheels, any non-degenerate wheeled robot belongs to one of the two following classes. Class I comprises those robots whose admissible trajectories in the plane are C^1 and piecewise C^2; and class II comprises those robots whose admissible trajectories are C^1, piecewise C^2 and, in addition, curvature-continuous. Second, based on this characterization, we derive new feedback control and gap filling algorithms for wheeled mobile robots using the recently-developed affine trajectory deformation framework.
  • A Homicidal Differential Drive Robot Authors: Ruiz, Ubaldo; Murrieta-Cid, Rafael
    In this paper, we consider the problem of capturing an omnidirectional evader using a differential drive robot in an obstacle-free environment. At the beginning of the game the evader is at a distance L > l from the pursuer. The pursuer's goal is to get closer to the evader than the capture distance l. The goal of the evader is to keep the pursuer farther away than this capture distance at all times. In this paper, we find closed-form representations of the motion primitives and time-optimal strategies for each player. These strategies are in Nash equilibrium, meaning that any unilateral deviation of a player from these strategies does not benefit that player toward the goal of winning the game. We also present the condition defining the winner of the game, and we construct a solution over the entire reduced space.
  • On the Dynamic Model and Motion Planning for a Class of Spherical Rolling Robots Authors: Svinin, Mikhail; Yamamoto, Motoji
    The paper deals with the dynamics and motion planning for a spherical rolling robot actuated by internal rotors that are placed on orthogonal axes. The driving principle for such a robot exploits non-holonomic constraints to propel the rolling carrier. The full mathematical model as well as its reduced version are derived, and the inverse dynamics problem is addressed. It is shown that if the rotors are mounted on three orthogonal axes, any feasible kinematic trajectory of the rolling robot is dynamically realizable. For the case of only two orthogonal axes of actuation, the condition for dynamic realizability of a feasible kinematic trajectory is established. The implication of this condition for motion planning in the dynamic formulation is explored in a case study. It is shown there that when maneuvering the robot by tracing circles on the sphere surface, the dynamically realizable trajectories are essentially different from those resulting from kinematic models.
  • Control of Nonprehensile Rolling Manipulation: Balancing a Disk on a Disk Authors: Ryu, Ji-Chul; Ruggiero, Fabio; Lynch, Kevin
    This paper presents stabilization control of a rolling manipulation system called the disk-on-disk. The system consists of two disks in which the upper disk (object) is free to roll on the lower disk (hand) under the influence of gravity. The goal is to stabilize the object at the unstable upright position directly above the hand. We use backstepping to derive a control law yielding global asymptotic stability. We present simulation as well as experimental results demonstrating the controller.
  • Estimating Probability of Collision for Safe Motion Planning under Gaussian Motion and Sensing Uncertainty Authors: Patil, Sachin; van den Berg, Jur; Alterovitz, Ron
    We present a fast, analytical method for estimating the probability of collision of a motion plan for a mobile robot operating under the assumptions of Gaussian motion and sensing uncertainty. Estimating the probability of collision is an integral step in many algorithms for motion planning under uncertainty and is crucial for characterizing the safety of motion plans. Our method is computationally fast, enabling its use in online motion planning, and provides conservative estimates to promote safety. To improve accuracy, we use a novel method to truncate estimated a priori state distributions to account for the fact that the probability of collision at each stage along a plan is conditioned on the previous stages being collision free. Our method can be directly applied within a variety of existing motion planners to improve their performance and the quality of computed plans. We apply our method to a car-like mobile robot with second order dynamics and to a steerable medical needle in 3D and demonstrate that our method for estimating the probability of collision is orders of magnitude faster than naive Monte Carlo sampling methods and reduces estimation error by more than 25% compared to prior methods. (A code sketch contrasting an analytic Gaussian estimate with Monte Carlo sampling follows this session's list.)
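
A minimal sketch loosely following "Model Predictive Navigation for Position and Orientation Control of Nonholonomic Vehicles": a sampling-based receding-horizon step for a unicycle that scores short constant-input rollouts by distance to the origin, heading error, and a hard obstacle penalty. The paper's artificial reference field, discontinuous policy, and formal guarantees are not reproduced; this only illustrates the receding-horizon structure.

```python
import numpy as np

def rollout(state, v, w, dt=0.1, horizon=10):
    """Integrate unicycle dynamics (x, y, theta) under constant inputs (v, w)."""
    x, y, th = state
    traj = []
    for _ in range(horizon):
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        th += w * dt
        traj.append((x, y, th))
    return np.array(traj)

def mpc_step(state, obstacles, r_safe=0.4):
    """Pick the constant (v, w) whose rollout minimises a simple cost."""
    best, best_cost = None, np.inf
    for v in np.linspace(0.0, 0.5, 6):
        for w in np.linspace(-1.5, 1.5, 13):
            traj = rollout(state, v, w)
            xg, yg, thg = traj[-1]
            # distance to origin plus wrapped heading error at the horizon end
            cost = np.hypot(xg, yg) + 0.3 * abs(np.arctan2(np.sin(thg), np.cos(thg)))
            for ox, oy in obstacles:   # hard penalty on any predicted collision
                if np.any(np.hypot(traj[:, 0] - ox, traj[:, 1] - oy) < r_safe):
                    cost = np.inf
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best

state = (2.0, 1.0, np.pi / 2)
print("next (v, w):", mpc_step(state, obstacles=[(1.0, 0.5)]))
```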
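
For "Estimating Probability of Collision for Safe Motion Planning under Gaussian Motion and Sensing Uncertainty", a small sketch comparing a closed-form Gaussian estimate with naive Monte Carlo sampling for a point robot and a single half-plane obstacle. The paper's truncation of a priori distributions and its conditioning on earlier stages being collision-free are not modelled here; this only illustrates the kind of quantity being estimated.

```python
import numpy as np
from math import erf, sqrt

def collision_prob_gaussian(mean, cov, a, b):
    """P(a . x >= b) for x ~ N(mean, cov): collision with the half-plane a.x >= b."""
    mu = float(a @ mean)
    var = float(a @ cov @ a)
    z = (b - mu) / sqrt(var)
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))       # 1 - Phi(z)

def collision_prob_monte_carlo(mean, cov, a, b, n=100000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(mean, cov, size=n)
    return float(np.mean(x @ a >= b))

mean = np.array([0.0, 0.0])
cov = np.diag([0.04, 0.09])            # uncertain robot position
a, b = np.array([1.0, 0.0]), 0.5       # obstacle occupies the region x >= 0.5

print("analytic :", round(collision_prob_gaussian(mean, cov, a, b), 4))
print("sampling :", round(collision_prob_monte_carlo(mean, cov, a, b), 4))
```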

Calibration and Identification

  • Automatic Camera and Range Sensor Calibration using a single Shot Authors: Geiger, Andreas; Moosmann, Frank; Car, Ömer; Schuster, Bernhard
    As a core robotic and vision problem, camera and range sensor calibration has been researched intensely over the last decades. However, robotic research efforts still often get heavily delayed by the requirement of setting up a calibrated system consisting of multiple cameras and range measurement units. To remove this burden, we present a toolbox with a web interface for fully automatic camera-to-camera and camera-to-range calibration. Our system is easy to set up and recovers intrinsic and extrinsic camera parameters as well as the transformation between cameras and range sensors within one minute. In contrast to existing calibration approaches, which often require user intervention, the proposed method is robust to varying imaging conditions, fully automatic, and easy to use, since a single image and range scan prove sufficient for most calibration scenarios. Experimentally, we demonstrate that the proposed checkerboard corner detector significantly outperforms the current state of the art. Furthermore, the proposed camera-to-range registration method is able to discover multiple solutions in the case of ambiguities. Experiments using a variety of sensors such as grayscale and color cameras, the Kinect 3D sensor and the Velodyne HDL-64 laser scanner show the robustness of our method in different indoor and outdoor settings and under various lighting conditions. (A baseline checkerboard-detection sketch follows this session's list.)
  • Scale-Only Visual Homing from an Omnidirectional Camera Authors: Pradalier, Cedric; Liu, Ming; Pomerleau, Francois; Siegwart, Roland
    Visual homing is the process by which a mobile robot moves to a Home position using only information extracted from visual data. The approach we present in this paper uses image keypoints (e.g. SIFT) extracted from omnidirectional images and matches the current set of keypoints with the set recorded at the Home location. In this paper, we first formulate three different visual homing problems using an uncalibrated omnidirectional camera within the Image-Based Visual Servoing (IBVS) framework; we then propose a novel simplified homing approach, inspired by IBVS, based only on the scale information of the SIFT features, with a computational cost linear in the number of features. This paper reports on the application of our method to a commonly cited indoor database, where it outperforms other approaches. We also briefly present results on a real robot and allude to its integration into a topological navigation framework. (A heuristic scale-only homing sketch follows this session's list.)
  • 3D Monocular Robotic Ball Catching with an Iterative Trajectory Estimation Refinement Authors: Lippiello, Vincenzo; Ruggiero, Fabio
    In this paper, a 3D robotic ball catching algorithm which employs only an eye-in-hand monocular visual system is presented. A partitioned visual servoing control is used to generate the robot motion, always keeping the ball in the field of view of the camera. When the ball is detected, the camera mounted on the robot end-effector is commanded to follow a suitable baseline in order to acquire measurements and provide a first possible interception point through a linear estimation process. Thereafter, further visual measurements are acquired in order to continuously refine the previous prediction through a non-linear estimation process. Experimental results show the effectiveness of the proposed solution. (A trajectory-fitting sketch follows this session's list.)
  • Automatically Calibrating the Viewing Direction of Optic-Flow Sensors Authors: Briod, Adrien; Zufferey, Jean-Christophe; Floreano, Dario
    Because of their low weight, cost and energy consumption, optic-flow sensors attract growing interest in robotics for tasks such as self-motion estimation or depth measurement. Most applications require a large number of these sensors, which involves a fair amount of calibration work for each setup. In particular, the viewing direction of each sensor has to be measured for proper operation. This task is often cumbersome and prone to errors, and has to be carried out every time the setup is slightly modified. This paper proposes an algorithm for viewing direction calibration relying on rate gyroscope readings and a recursive weighted linear least-squares estimation of the rotation matrix elements. The method only requires the user to perform random rotational motions of the setup by hand. The algorithm provides hints about the current precision of the estimation and what motions should be performed to improve it. To assess the validity of the method, tests were performed on an experimental setup and the results compared to a precise manual calibration. The repeatability of the gyroscope-based calibration process reached ±1.7° per axis. (A generic recursive weighted least-squares sketch follows this session's list.)
  • An Analytical Least-Squares Solution to the Odometer-Camera Extrinsic Calibration Problem Authors: Guo, Chao; Mirzaei, Faraz; Roumeliotis, Stergios
    In order to fuse camera and odometer measurements, we first need to estimate their relative transformation through the so-called odometer-camera extrinsic calibration. In this paper, we present a two-step analytical least-squares solution for the extrinsic odometer-camera calibration that (i) is not iterative and finds the least-squares optimal solution without any initialization, and (ii) does not require any special hardware or the presence of known landmarks in the scene. Specifically, in the first step, we estimate a subset of the 3D relative rotation parameters by analytically minimizing a least-squares cost function. We then back-substitute these estimates in the measurement constraints, and determine the rest of the 3D transformation parameters by analytically minimizing a second least-squares cost function. Simulation and experimental results are presented to validate the efficiency of the proposed algorithm.
  • Online Calibration of Vehicle Powertrain and Pose Estimation Parameters Using Integrated Dynamics Authors: Seegmiller, Neal Andrew; Kelly, Alonzo; Rogers-Marcovitz, Forrest
    This paper presents an online approach to calibrating vehicle model parameters that uses the integrated dynamics of the system. Specifically, we describe the identification of the time constant and delay in a first-order model of the vehicle powertrain, as well as parameters required for pose estimation (including position offsets for the inertial measurement unit, steer angle sensor parameters, and wheel radius). Unlike classical techniques, our approach does not require differentiation of state measurements, making it ideal when only low-frequency measurements are available. Experimental results on the LandTamer and Zoe rover platforms show online calibration using integrated dynamics to be fast and more accurate than both manual and classical calibration methods. (A sketch of first-order model identification from integrated dynamics follows this session's list.)
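
"Automatic Camera and Range Sensor Calibration using a Single Shot" builds on robust checkerboard corner detection; the sketch below is only a standard OpenCV baseline for extracting sub-pixel checkerboard corners from a single image, not the detector proposed in the paper, and is useful mainly as a point of comparison. The image path and pattern size are hypothetical.

```python
import cv2
import numpy as np

def detect_checkerboard_corners(image_path, pattern=(9, 6)):
    """Detect and refine checkerboard corners in one image (OpenCV baseline).
    pattern = number of inner corners per row and column."""
    img = cv2.imread(image_path)
    if img is None:
        return None
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        return None
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    return corners.reshape(-1, 2)

corners = detect_checkerboard_corners("calibration_shot.png")   # hypothetical file
if corners is not None:
    print("detected", len(corners), "corners; first corner at", corners[0])
```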
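
A heuristic sketch inspired by "Scale-Only Visual Homing from an Omnidirectional Camera": given keypoints matched between the current and Home omnidirectional images, features whose SIFT scale is now smaller than at Home pull the robot toward their bearing, and features that look larger push it away. This is one plausible reading of the scale-only idea, not the authors' control law; note that the cost is linear in the number of features, as in the paper.

```python
import numpy as np

def homing_direction(bearings, scale_current, scale_home):
    """bearings: (N,) feature bearings in the robot frame (radians) from the
    omnidirectional image; scale_*: (N,) SIFT scales of the matched keypoints.
    Returns a unit vector pointing in the suggested homing direction."""
    bearings = np.asarray(bearings, float)
    # log scale ratio > 0: feature is smaller now than at Home -> move toward it;
    # < 0: it looks bigger than at Home -> move away from it.
    pull = np.log(np.asarray(scale_home, float) / np.asarray(scale_current, float))
    direction = np.sum(pull[:, None] *
                       np.stack([np.cos(bearings), np.sin(bearings)], axis=1), axis=0)
    n = np.linalg.norm(direction)
    return direction / n if n > 0 else direction

# Three matched features with hypothetical bearings and scales.
print(homing_direction(bearings=[0.0, 2.0, -2.5],
                       scale_current=[3.0, 5.0, 4.0],
                       scale_home=[4.0, 4.5, 4.0]))
```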
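
For "3D Monocular Robotic Ball Catching with an Iterative Trajectory Estimation Refinement", a sketch of the refinement loop only: fit a drag-free ballistic model to the 3D ball measurements gathered so far by least squares, predict where the trajectory crosses a hypothetical catch plane, and repeat as more measurements arrive. Monocular depth recovery and the visual-servoing controller are outside this sketch.

```python
import numpy as np

G = 9.81  # m/s^2

def fit_ballistic(t, p):
    """Least-squares fit of a drag-free ballistic model to measurements.
    t: (N,) times, p: (N, 3) ball positions. Returns (p0, v0)."""
    A = np.column_stack([np.ones_like(t), t])              # [1, t] per sample
    z_comp = p[:, 2] + 0.5 * G * t**2                      # remove the known gravity term
    targets = np.column_stack([p[:, 0], p[:, 1], z_comp])
    coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)   # rows: [p0; v0]
    return coeffs[0], coeffs[1]

def predict_interception(p0, v0, z_catch=1.0):
    """Time and (x, y) at which the predicted trajectory crosses z = z_catch."""
    # Solve z_catch = p0z + v0z*t - 0.5*g*t^2 for the later (descending) root.
    a, b, c = -0.5 * G, v0[2], p0[2] - z_catch
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t_hit = (-b - np.sqrt(disc)) / (2 * a)
    return t_hit, p0[:2] + v0[:2] * t_hit

# Refine the estimate each time new measurements arrive (placeholder data).
rng = np.random.default_rng(3)
t = np.linspace(0.0, 0.4, 12)
truth = np.column_stack([2.0 - 3.0 * t, 0.5 * t, 1.5 + 4.0 * t - 0.5 * G * t**2])
for n in (4, 8, 12):                                       # growing measurement window
    p0, v0 = fit_ballistic(t[:n], truth[:n] + rng.normal(0, 0.01, (n, 3)))
    print(n, "measurements ->", predict_interception(p0, v0))
```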
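
The calibration in "Automatically Calibrating the Viewing Direction of Optic-Flow Sensors" rests on a recursive weighted linear least-squares estimator; since the abstract does not give the sensor model, the sketch below shows only a generic recursive weighted least-squares update for a linear model y = H x + noise, the estimator family the paper refers to.

```python
import numpy as np

class RecursiveWeightedLS:
    """Generic recursive weighted least squares for y = H x + noise."""

    def __init__(self, n_params, prior_var=1e3):
        self.x = np.zeros(n_params)             # current parameter estimate
        self.P = np.eye(n_params) * prior_var   # estimate covariance

    def update(self, H, y, weight=1.0):
        """Incorporate one measurement y with regressor row(s) H and weight."""
        H = np.atleast_2d(H)
        y = np.atleast_1d(y)
        R = np.eye(len(y)) / weight                 # lower weight = noisier sample
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)         # gain
        self.x = self.x + K @ (y - H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
        return self.x

# Example: recover a 3-parameter linear map from noisy scalar measurements.
rng = np.random.default_rng(4)
true_x = np.array([0.2, -1.0, 0.5])
est = RecursiveWeightedLS(3)
for _ in range(200):
    H = rng.normal(size=(1, 3))
    y = H @ true_x + rng.normal(0, 0.05)
    est.update(H, y)
print("estimate:", np.round(est.x, 2))
```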
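
For "Online Calibration of Vehicle Powertrain and Pose Estimation Parameters Using Integrated Dynamics", a sketch of the core idea under simplifying assumptions: a first-order powertrain model tau*dv/dt + v = u(t - d) is identified without differentiating the velocity measurements by integrating the model over time, scanning candidate delays d, and solving for the time constant tau by least squares at each candidate. This is an illustration of identification from integrated dynamics, not the paper's full estimator.

```python
import numpy as np

def identify_powertrain(t, u, v, delays):
    """Identify (tau, delay) in tau*dv/dt + v = u(t - delay) from sampled
    command u and measured velocity v, using the integrated model
        tau*(v(t) - v(0)) + int_0^t v = int_0^t u(s - delay) ds,
    so no numerical differentiation of v is needed."""
    dt = t[1] - t[0]
    int_v = np.cumsum(v) * dt
    best = None
    for d in delays:
        shift = int(round(d / dt))
        u_d = np.concatenate([np.full(shift, u[0]), u[:len(u) - shift]]) if shift else u
        int_u = np.cumsum(u_d) * dt
        A = v - v[0]                       # coefficient of tau at each sample
        b = int_u - int_v
        tau = float(A @ b) / float(A @ A)
        resid = float(np.sum((tau * A - b) ** 2))
        if best is None or resid < best[2]:
            best = (tau, d, resid)
    return best[0], best[1]

# Simulate placeholder data from a known model and recover its parameters.
dt, tau_true, d_true = 0.01, 0.8, 0.15
t = np.arange(0.0, 20.0, dt)
u = (np.sin(0.5 * t) > 0).astype(float)        # step-like throttle commands
v = np.zeros_like(t)
shift_true = int(round(d_true / dt))
for k in range(1, len(t)):
    u_del = u[k - shift_true] if k >= shift_true else 0.0
    v[k] = v[k - 1] + dt * (u_del - v[k - 1]) / tau_true
print(identify_powertrain(t, u, v, delays=np.arange(0.0, 0.3, 0.01)))
```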