TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos can be obtained by visiting this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv

Grasping: Learning and Estimation

  • End-To-End Dexterous Manipulation with Deliberate Interactive Estimation Authors: Hudson, Nicolas; Howard, Tom; Ma, Jeremy; Jain, Abhinandan; Bajracharya, Max; Myint, Steven; Matthies, Larry; Backes, Paul; Hebert, Paul; Fuchs, Thomas; Burdick, Joel
    This paper presents a model-based approach to autonomous dexterous manipulation, developed as part of the DARPA Autonomous Robotic Manipulation (ARM) program. The autonomy system uses robot, object, and environment models to identify and localize objects, as well as to plan and execute the required manipulation tasks. Deliberate interaction with objects and the environment increases system knowledge about the combined robot and environmental state, enabling high-precision tasks such as key insertion to be performed in a consistent framework. This approach has been demonstrated across a wide range of manipulation tasks and is the leading manipulation approach in independent DARPA testing.
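    A minimal sketch of the deliberate-interaction idea in Python: a touch produces a direct but noisy measurement of object state, and fusing it tightens the robot's belief. The 1-D Gaussian model and all names below are illustrative assumptions, not the paper's estimator.

      import math

      def touch_update(mean, var, z, sensor_var):
          # Fuse one contact measurement into a Gaussian belief (1-D Kalman update).
          k = var / (var + sensor_var)  # Kalman gain
          return mean + k * (z - mean), (1.0 - k) * var

      # Prior belief about the object's position along one axis (metres).
      mean, var = 0.50, 0.02 ** 2
      # A deliberate touch yields a direct but noisy position measurement.
      mean, var = touch_update(mean, var, z=0.487, sensor_var=0.005 ** 2)
      print(mean, math.sqrt(var))  # the posterior is tighter than the prior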
  • Template-Based Learning of Grasp Selection Authors: Herzog, Alexander; Pastor, Peter; Kalakrishnan, Mrinal; Righetti, Ludovic; Asfour, Tamim; Schaal, Stefan
    The ability to grasp unknown objects is an important skill for personal robots and has been addressed by many past and present research projects. A crucial aspect of grasping is choosing an appropriate grasp configuration, i.e., the 6D pose of the hand relative to the object and its finger configuration. Finding feasible grasp configurations for novel objects, however, is challenging because of the huge variety in the shape and size of these objects and the specific kinematics of the robotic hand in use. In this paper, we introduce a new grasp selection algorithm that finds object grasp poses based on previously demonstrated grasps. Assuming that objects with similar shapes can be grasped in a similar way, we associate a grasp template with each demonstrated grasp. The template is a local shape descriptor for a possible grasp pose and is constructed using 3D information from depth sensors. For each new object to grasp, the algorithm then finds the best grasp candidate in the library of templates. The grasp selection also improves over time, using information from previous grasp attempts to adapt the ranking of the templates. We tested the algorithm on two different platforms, the Willow Garage PR2 and the Barrett WAM arm, which have very different hands. Our results show that the algorithm finds good grasp configurations for a large set of objects from a relatively small set of demonstrations, and does indeed improve its performance over time.
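    The selection loop can be sketched as follows, assuming templates are fixed-length shape descriptors and that the ranking adapts via a simple per-template success score; both are placeholder choices, not the paper's actual descriptor or update rule.

      import numpy as np

      # Each template pairs a local shape descriptor with a demonstrated grasp
      # pose and a running score updated from past grasp attempts.
      library = [{"descriptor": np.random.rand(64), "pose": None, "score": 0.0}
                 for _ in range(10)]

      def select_grasp(object_descriptor, library):
          # Rank templates by shape similarity plus past success; return the best.
          def rank(t):
              similarity = -np.linalg.norm(t["descriptor"] - object_descriptor)
              return similarity + t["score"]
          return max(library, key=rank)

      def record_outcome(template, succeeded, step=0.1):
          # Adapt the ranking: reward templates that led to successful grasps.
          template["score"] += step if succeeded else -step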
  • Learning Hardware Agnostic Grasps for a Universal Jamming Gripper Authors: Jiang, Yun; Amend, John; Lipson, Hod; Saxena, Ashutosh
    Grasping has been studied from various perspectives, including planning, control, and learning. In this paper, we take a learning approach to predict successful grasps for a universal jamming gripper. A jamming gripper consists of a flexible membrane filled with granular material, and it can quickly harden or soften to grip objects of varying shape by modulating the air pressure within the membrane. Although this gripper is easy to control, it is difficult to develop a physical model of its gripping mechanism because it undergoes significant deformation during use. Thus, many grasping approaches based on physical models (such as those based on form- and force-closure) would be challenging to apply to a jamming gripper. Here we instead use a supervised learning algorithm and design both visual and shape features to capture the properties of good grasps. We show that given an RGB image and a point cloud of the target object, our algorithm can predict successful grasps for the jamming gripper without requiring a physical model. It can therefore be applied to both a parallel-plate gripper and a jamming gripper without modification. We demonstrate that our learning algorithm enables both grippers to pick up a wide variety of objects, and through robotic experiments we are able to define the type of objects each gripper is best suited for handling.
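    As a rough illustration of the learning step, the sketch below trains an off-the-shelf SVM on concatenated visual and shape features and picks the candidate grasp with the highest predicted success probability; the feature dimensions and random placeholder data are assumptions, not the paper's features.

      import numpy as np
      from sklearn.svm import SVC

      # X: one row per candidate grasp, concatenating visual features (RGB image
      # patch) and shape features (point cloud); y: 1 if the grasp succeeded.
      X_train = np.random.rand(200, 32)            # placeholder training data
      y_train = np.random.randint(0, 2, 200)

      clf = SVC(probability=True).fit(X_train, y_train)

      def best_grasp(candidate_features, clf):
          # Score candidate grasps; return the index of the most promising one.
          p_success = clf.predict_proba(candidate_features)[:, 1]
          return int(np.argmax(p_success))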
  • Learning Grasp Stability Authors: Dang, Hao; Allen, Peter
    We address the problem of blind grasping, where tactile feedback is used to predict the stability of a robotic grasp given no visual or geometric information about the object being grasped. We first simulated tactile feedback using a soft-finger contact model in GraspIt! and computed tactile contacts for thousands of grasps with a robotic hand using the Columbia Grasp Database. We used K-means clustering to learn a contact dictionary from the tactile contacts, i.e., a codebook that models the contact space. The feature vector for a grasp is a histogram computed from the distribution of its contacts over the contact space defined by the dictionary. An SVM is then trained to predict the stability of a robotic grasp given this feature vector. Experiments indicate that this model, which requires only a low-dimensional feature input, is useful in predicting the stability of a grasp.
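    The dictionary-plus-SVM pipeline maps directly onto standard tools; the sketch below uses scikit-learn with placeholder contact dimensions and random stand-in data, so only the structure, not the numbers, reflects the paper.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.svm import SVC

      # Tactile contacts from many simulated grasps, one feature row per contact.
      contacts = np.random.rand(5000, 6)           # placeholder contact features

      # Learn the contact dictionary (codebook) with K-means.
      dictionary = KMeans(n_clusters=64, n_init=10).fit(contacts)

      def grasp_histogram(grasp_contacts, dictionary):
          # Grasp feature: histogram of its contacts over the codebook entries.
          words = dictionary.predict(grasp_contacts)
          hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
          return hist / max(hist.sum(), 1.0)

      # Train an SVM on per-grasp histograms labelled stable (1) or unstable (0).
      X = np.vstack([grasp_histogram(contacts[i:i + 8], dictionary)
                     for i in range(0, 800, 8)])
      y = np.random.randint(0, 2, len(X))          # placeholder stability labels
      svm = SVC().fit(X, y)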
  • Learning to Slide a Magnetic Card through a Card Reader Authors: Sukhoy, Vladimir; Georgiev, Veselin; Wegter, Todd; Sweidan, Ramy; Stoytchev, Alexander
    This paper describes a set of experiments in which an upper-torso humanoid robot learned to slide a card through a card reader. The small size and flexibility of the card presented a number of manipulation challenges for the robot. First, because most of the card is occluded by the card reader and the robot's hand during the sliding process, visual feedback is useless for this task. Second, because the card bends easily, it is difficult to distinguish between bending and hitting an obstacle in order to correct the sliding trajectory. To solve these manipulation challenges, this paper proposes a method for constraint detection that uses only proprioceptive data. The method uses dynamic joint torque thresholds that are calibrated using the robot's movements in free space. The experimental results show that, using this method, the robot can detect when the movement of the card is constrained and modify the sliding trajectory in real time, which makes solving this task possible.
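    A sketch of the threshold idea, assuming a 7-joint arm and mean-plus-margin bands calibrated from free-space torques; the paper's thresholds are likewise calibrated from the robot's free-space movements, but their exact dynamic form here is an assumption.

      import numpy as np

      def calibrate_thresholds(free_space_torques, margin=3.0):
          # Per-joint bands from torques recorded while moving in free space.
          mean = free_space_torques.mean(axis=0)
          std = free_space_torques.std(axis=0)
          return mean + margin * std, mean - margin * std

      def constrained(torques, upper, lower):
          # Flag a constraint when any joint torque leaves its calibrated band.
          return bool(np.any(torques > upper) or np.any(torques < lower))

      # Calibrate from free-space motion, then monitor during the sliding motion.
      upper, lower = calibrate_thresholds(np.random.randn(1000, 7) * 0.1)
      if constrained(np.random.randn(7) * 0.1, upper, lower):
          pass  # e.g., back off and adjust the card's sliding trajectory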