TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos is available via this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided your consent form for your video to be published and it is still missing, please contact support@techtalks.tv

Animation & Simulation

  • Conditions for Uniqueness in Simultaneous Impact with Application to Mechanical Design Authors: Seghete, Vlad; Murphey, Todd
    We present a collision resolution method based on momentum maps and show how it extends to handling multiple simultaneous collisions. Simultaneous collisions, which are common in robots that walk or climb, do not necessarily have unique outcomes, but we show that for special configurations (e.g., when the surfaces of contact are orthogonal in the appropriate sense) simultaneous impacts have unique outcomes, making them considerably easier to understand and simulate. This uniqueness lets us develop a measure of the unpredictability of the impact outcome based on the state at impact, which can be used for gait and mechanism design so that a mechanism’s actions are more predictable and hence controllable. As a preliminary example, we explore the configuration space at impact for a model of the RHex running robot and find optimal configurations at which the unpredictability of the impact outcome is minimized.
  • Dynamics Simulation for the Training of Teleoperated Retrieval of Spent Nuclear Fuel Authors: Cornella, Jordi; Zerbato, Davide; Giona, Luca; Fiorini, Paolo; Sequeira, Vitor
    This paper addresses the problem of training operators for telemanipulation tasks. In particular, it describes the development of a physics-based virtual environment that allows a user to train in the control of an innovative robotic tool designed for the retrieval of spent nuclear fuel. The robotic device is designed to adapt to very different environments, at the cost of increased complexity in its control. The virtual environment provides realistic simulation of robot dynamics. The two most challenging tasks related to robot control have been identified and implemented in the simulation, leading to an effective training tool. The developed application is described in detail, and the outcome of one simulated intervention is presented and analyzed in terms of user interaction and realism.
  • Putting the Fish in the Fish Tank: Immersive VR for Animal Behavior Experiments Authors: Butail, Sachit; Paley, Derek; Chicoli, Amanda
    We describe a virtual-reality framework for investigating startle-response behavior in fish. Using real-time three-dimensional tracking, we generate looming stimuli at a specific location on a computer screen, such that the shape and size of the looming stimuli change according to the fish's perspective and location in the tank. We demonstrate the effectiveness of the setup through experiments on Giant danio and compute the success rate in eliciting a response. We also estimate visual startle sensitivity by presenting the stimulus from different directions around the fish head. The aim of this work is to provide the basis for quantifying escape behavior in fish schools.
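As a rough illustration of the perspective-dependent rendering described above, the on-screen size of a looming stimulus can be computed from the fish's tracked position with a simple pinhole model. This is only a sketch under assumed geometry (flat screen, stimulus centered on the viewing axis); the function and parameter names are illustrative, not taken from the paper.

```python
import math

def looming_radius_px(obj_radius_m, obj_dist_m, fish_to_screen_m, px_per_m):
    """On-screen radius (pixels) that makes a virtual sphere of radius
    obj_radius_m at distance obj_dist_m from the fish subtend the correct
    visual angle, given the fish's current distance to the screen.
    Pinhole geometry; all parameter names are illustrative."""
    half_angle = math.atan2(obj_radius_m, obj_dist_m)      # visual half-angle
    screen_radius_m = fish_to_screen_m * math.tan(half_angle)
    return screen_radius_m * px_per_m
```

For a looming (approaching) stimulus, obj_dist_m is decreased over time, which makes the rendered radius grow nonlinearly, the hallmark of a looming cue.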
  • Design and Implementation of Dynamic Simulators for the Testing of Inertial Sensors Authors: Allotta, Benedetto; Becciolini, Lorenzo; Costanzi, Riccardo; Giardi, Francesca; Ridolfi, Alessandro; Vettori, Gregorio
    Many dynamic simulators have been developed over the last thirty years for different types of vehicles; flight simulators and driving simulators are well-known examples. This paper describes the design and implementation of a dynamic simulator for the testing of inertial sensors devoted to vehicle navigation, using a Hardware-In-The-Loop test rig composed of an industrial robot and a commercially available Inertial Measurement Unit (IMU). The authors are developing an innovative localization algorithm for railway vehicles which integrates inertial sensors with tachometers. A testing simulator capable of realistically replicating the dynamic effects of vehicle motion on inertial sensors makes it possible to avoid expensive on-board acquisitions and to speed up algorithm tuning. The real-time control architecture of the available industrial robot allows motion trajectories to be specified and executed precisely, under the tight path and time-law constraints required by the application at hand.
  • Automatic Data Driven Vegetation Modeling for Lidar Simulation Authors: Deschaud, Jean-Emmanuel; Prasser, David; Dias, M. Freddie; Browning, Brett; Rander, Peter
    Traditional lidar simulations render surface models to generate simulated range data. For objects with well-defined surfaces, this approach works well, and traditional 3D scene reconstruction algorithms can be employed to automatically generate the surface models. This approach breaks down, though, for many trees, tall grasses, and other objects with fine-scale geometry: surface models do not easily represent the geometry, and automated reconstruction from real data is difficult. In this paper, we introduce a new stochastic volumetric model that better captures the complexities of real lidar data of vegetation and is far better suited for automatic modeling of scenes from field-collected lidar data. We also introduce several methods for automatic modeling and for simulating lidar data utilizing the new model. To measure the performance of the stochastic simulation, we use histogram comparison metrics to quantify the differences between data produced by the real and simulated lidar. We evaluate our approach on a range of real-world datasets and show improved fidelity for simulating geo-specific outdoor vegetation scenes.
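The histogram-comparison evaluation mentioned above can be sketched as follows. The specific metrics the authors use are not given here; this sketch assumes two common choices (histogram intersection and symmetric chi-square) applied to binned range returns, and all function names are illustrative.

```python
import numpy as np

def range_histograms(real_ranges, sim_ranges, bins=50):
    """Bin real and simulated lidar ranges on a common support."""
    lo = min(real_ranges.min(), sim_ranges.min())
    hi = max(real_ranges.max(), sim_ranges.max())
    h_real, _ = np.histogram(real_ranges, bins=bins, range=(lo, hi), density=True)
    h_sim, _ = np.histogram(sim_ranges, bins=bins, range=(lo, hi), density=True)
    # Normalize to unit mass so the metrics compare shapes, not sample counts.
    h_real = h_real / (h_real.sum() + 1e-12)
    h_sim = h_sim / (h_sim.sum() + 1e-12)
    return h_real, h_sim

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical histograms, 0.0 for disjoint."""
    return float(np.minimum(h1, h2).sum())

def chi_square(h1, h2):
    """Symmetric chi-square distance: 0.0 for identical histograms."""
    denom = h1 + h2
    mask = denom > 0
    return float(0.5 * np.sum((h1[mask] - h2[mask]) ** 2 / denom[mask]))
```

A simulated scan that matches the real one well scores an intersection near 1.0 and a chi-square distance near 0.0.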
  • Simulation of Tactile Sensors Using Soft Contacts for Robot Grasping Applications Authors: Moisio, Sami; Leon, Beatriz; Korkealaakso, Pasi; Morales, Antonio
    In the context of robot grasping and manipulation, realistic simulation requires accurate modeling of contacts between bodies and, at a practical level, accurate simulation of touch sensors. This paper addresses the problem of simulating a tactile sensor considering soft contacts and a full friction description. The developed model consists of a surface contact patch described by a mesh of contact elements. For each element, a full friction description is built that accounts for stick-slip phenomena. The model is then implemented and used to perform typical tasks related to tactile sensors. The performance of the simulated sensor is then compared to a real one, and it is demonstrated how the sensor can be integrated into the simulation of a complete robot grasping system.

Planning and Navigation of Biped Walking

  • Real-Time Footstep Planning for Humanoid Robots among 3D Obstacles Using a Hybrid Bounding Box Authors: Perrin, Nicolas Yves; Stasse, Olivier; Lamiraux, Florent; Kim, Young J.; Manocha, Dinesh
    In this paper we introduce a new bounding box method for footstep planning for humanoid robots. Similar to the classic bounding box method (which uses a single rectangular box to encompass the robot) it is computationally efficient, easy to implement and can be combined with any rigid body motion planning library. However, unlike the classic bounding box method, our method takes into account the stepping over capabilities of the robot, and generates precise leg trajectories to avoid obstacles on the ground. We demonstrate that this method is well suited for footstep planning in cluttered environments.
  • Foot Placement for Planar Bipeds with Point Feet Authors: van Zutven, Pieter; Kostic, Dragan; Nijmeijer, Hendrik
    When humanoid robots are to be used in society, they should be capable of maintaining balance. Knowing where to step is crucial to remaining balanced. This paper contributes the foot placement indicator (FPI), an extension of the foot placement estimator (FPE) to planar bipeds with point feet and an arbitrary number of non-massless links. The method uses conservation of energy to determine where the planar biped needs to step to remain in balance. Simulations of the FPI show improved foot placement for balance with respect to the FPE.
  • A Framework for Extreme Locomotion Planning Authors: Dellin, Christopher; Srinivasa, Siddhartha
    A person practicing parkour is an incredible display of intelligent planning; he must reason carefully about his velocity and contact placement far into the future in order to locomote quickly through an environment. We seek to develop planners that will enable robotic systems to replicate this performance. An ideal planner can learn from examples and formulate feasible full-body plans to traverse a new environment. The proposed approach uses momentum equivalence to reduce the full-body system into a simplified one. Low-dimensional trajectory primitives are then composed by a sampling planner called Sampled Composition A* to produce candidate solutions that are adjusted by a trajectory optimizer and mapped to a full-body robot. Using primitives collected from a variety of sources, this technique is able to produce solutions to an assortment of simulated locomotion problems.
  • Adaptive Level-of-Detail Planning for Efficient Humanoid Navigation Authors: Hornung, Armin; Bennewitz, Maren
    In this paper, we consider the problem of efficient path planning for humanoid robots by combining grid-based 2D planning with footstep planning. In this way, we exploit the advantages of both frameworks, namely fast planning on grids and the ability to find solutions in situations where grid-based planning fails. Our method computes a global solution by adaptively switching between fast grid-based planning in open spaces and footstep planning in the vicinity of obstacles. To decide which planning framework to use, our approach classifies the environment into regions of different complexity with respect to traversability. Experiments carried out in a simulated office environment and with a Nao humanoid show that (i) our approach significantly reduces the planning time compared to pure footstep planning and (ii) the resulting plans are almost as good as globally computed optimal footstep paths.
  • Dominant Sources of Variability in Passive Walking Authors: Nanayakkara, Thrishantha; Byl, Katie; Liu, Hongbin; Song, Xiaojing; Villabona, Tim
    This paper investigates possible sources of variability in the dynamics of legged locomotion, even in its most idealized form. The rimless wheel model is a seemingly deterministic legged dynamic system, popular within the legged locomotion community for understanding basic collision dynamics and energetics during passive phases of walking. Despite the simplicity of this model, however, experimental motion capture data recording the passive step-to-step dynamics of a rimless wheel rolling down a constant-slope terrain demonstrates significant variability, providing strong evidence that stochasticity is an intrinsic, and thus unavoidable, property of legged locomotion that should be modeled with care when designing reliable walking machines. We present numerical comparisons of several hypotheses as to the dominant source(s) of this variability: 1) the initial distribution of the angular velocity, 2) the uneven profile of the leg lengths, and 3) the distribution of the coefficients of friction and restitution across collisions. Our analysis shows that the third hypothesis most accurately predicts the noise characteristics observed in our experimental data, while the first hypothesis is also valid for certain terrain friction conditions. These findings suggest that variability due to ground contact dynamics, and not simply due to the geometric terrain variations more typically modeled, is important in determining the stochasticity and resulting stability of walking robots.
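For readers unfamiliar with the rimless wheel, its step-to-step behavior reduces to a one-dimensional return map: angular speed is scaled at each spoke collision and energy is gained falling through the slope between collisions. The sketch below iterates that standard map and can optionally perturb the collision factor per step, loosely in the spirit of the paper's third hypothesis; the noise model and parameter names are illustrative, not the authors'.

```python
import math
import random

G = 9.81  # gravitational acceleration, m/s^2

def step_map(omega, alpha, gamma, L, eta=None):
    """One step of the rimless-wheel return map.

    omega: angular speed just before the current spoke collision
    alpha: half the inter-spoke angle; gamma: slope angle; L: leg length
    eta:   velocity retention factor at impact (cos(2*alpha) for the
           ideal plastic collision; pass a perturbed value to model
           variable contact conditions)
    """
    if eta is None:
        eta = math.cos(2 * alpha)
    omega_plus_sq = (eta * omega) ** 2
    # Kinetic energy gained falling through the slope between collisions.
    gain = (4 * G / L) * math.sin(alpha) * math.sin(gamma)
    return math.sqrt(omega_plus_sq + gain)

def rollout(steps, alpha, gamma, L, noise=0.0, seed=0):
    """Iterate the map; 'noise' perturbs the collision factor per step,
    crudely mimicking variable friction/restitution across collisions."""
    rng = random.Random(seed)
    omega = 1.0
    history = []
    for _ in range(steps):
        eta = math.cos(2 * alpha) * (1 + noise * rng.gauss(0, 1))
        omega = step_map(omega, alpha, gamma, L, eta)
        history.append(omega)
    return history
```

With noise=0 the map converges to its deterministic fixed point; any per-collision perturbation turns the steady gait into a distribution of step speeds, which is exactly the kind of variability the paper measures.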
  • First Steps Toward Underactuated Human-Inspired Bipedal Robotic Walking Authors: Ames, Aaron
    This paper presents the first steps toward going from human data to formal controller design to experimental realization in the context of underactuated bipedal robots. Specifically, by studying experimental human walking data, we find that specific outputs of the human, i.e., functions of the kinematics, appear to be canonical to walking and are all characterized by a single function of time, termed a human walking function. Using the human outputs and walking function, we design a human-inspired controller that drives the output of the robot to the output of the human as represented by the walking function. The main result of the paper is an optimization problem that determines the parameters of this controller so as to guarantee stable underactuated walking that is as "close" as possible to human walking. This result is demonstrated through the simulation of a physical underactuated 2D bipedal robot, AMBER. Experimentally implementing this control on AMBER through "feed-forward" control, i.e., trajectory tracking, repeatedly results in 5-10 steps.

Sensing for manipulation

  • Using Depth and Appearance Features for Informed Robot Grasping of Highly Wrinkled Clothes Authors: Ramisa, Arnau; Alenyà, Guillem; Moreno-Noguer, Francesc; Torras, Carme
    Detecting grasping points is a key problem in cloth manipulation. Most current approaches follow a multiple re-grasp strategy for this purpose, in which clothes are sequentially grasped from different points until one of them yields a desired configuration. In this paper, by contrast, we circumvent the need for multiple re-grasps by building a robust detector that identifies the grasping points, generally in a single step, even when clothes are highly wrinkled. In order to handle the large variability a deformed cloth may exhibit, we build a Bag-of-Features-based detector that combines appearance and 3D geometry features. An image is scanned using a sliding window with a linear classifier, and the candidate windows are refined using a non-linear SVM and a "grasp goodness" criterion to select the best grasping point. We demonstrate our approach by detecting collars in deformed polo shirts using a Kinect camera. Experimental results show good performance of the proposed method, not only in identifying the same trained textile object part under severe deformations and occlusions, but also in identifying the corresponding part in other clothes, demonstrating a degree of generalization.
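The two-stage scan-then-refine pipeline described in the abstract can be sketched generically: a cheap linear score filters windows, and a costlier non-linear score re-ranks the survivors. Both scoring functions are caller-supplied placeholders; nothing here reproduces the paper's actual features or classifiers.

```python
import numpy as np

def sliding_window_detect(linear_score, image, win=64, stride=16, refine=None):
    """Two-stage sliding-window detection.

    linear_score: cheap scorer; windows with score > 0 survive stage one.
    refine:       optional costlier scorer used to re-rank survivors.
    Returns the best (score, y, x) tuple, or None if nothing passes.
    """
    h, w = image.shape[:2]
    candidates = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            s = linear_score(image[y:y + win, x:x + win])
            if s > 0:                      # stage 1: linear classifier margin
                candidates.append((s, y, x))
    if refine is not None:                 # stage 2: non-linear re-ranking
        candidates = [(refine(image[y:y + win, x:x + win]), y, x)
                      for _, y, x in candidates]
    return max(candidates, default=None)
```

In the paper's setting the refinement stage would be the non-linear SVM combined with the "grasp goodness" criterion; here it is just a second callable.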
  • Integrating surface-based hypotheses and manipulation for autonomous segmentation and learning of object representations Authors: Ude, Ales; Schiebener, David; Morimoto, Jun
    Learning about new objects that a robot sees for the first time is a difficult problem because it is not clear how to define the concept of an object in general terms. In this paper we consider as objects those physical entities that are comprised of features which move consistently when the robot acts upon them. Among the possible actions that a robot could apply to a hypothetical object, pushing seems the most suitable due to its relative simplicity and general applicability. We propose a methodology to generate and apply pushing actions to hypothetical objects. A probing push causes visual features to move, which enables the robot to either confirm or reject the initial hypothesis about the existence of the object. Furthermore, the robot can discriminate the object from the background and accumulate visual features that are useful for training state-of-the-art statistical classifiers such as bag-of-features models.
  • From Object Categories to Grasp Transfer Using Probabilistic Reasoning Authors: Madry, Marianna; Song, Dan; Kragic, Danica
    In this paper we address the problem of grasp generation and grasp transfer between objects using categorical knowledge. The system is built upon (i) an active scene segmentation module, capable of generating object hypotheses and segmenting them from the background in real time, (ii) an object categorization system integrating 2D and 3D cues, and (iii) a probabilistic grasp reasoning system. Individual object hypotheses are first generated, categorized, and then used as input to a grasp generation and transfer system that encodes task, object, and action properties. The experimental evaluation compares individual 2D and 3D categorization approaches with the integrated system, and demonstrates the usefulness of the categorization in task-based grasping and grasp transfer.
  • Voting-Based Pose Estimation for Robotic Assembly Using a 3D Sensor Authors: Choi, Changhyun; Taguchi, Yuichi; Tuzel, Oncel; Liu, Ming-Yu; Ramalingam, Srikumar
    We propose a voting-based pose estimation algorithm applicable to 3D sensors, which are rapidly replacing their 2D counterparts in many robotics, computer vision, and gaming applications. It was recently shown that a pair of oriented 3D points, i.e., points on the object surface with normals, used in a voting framework enables fast and robust pose estimation. Although oriented surface points are discriminative for objects with sufficient curvature changes, they are not compact and discriminative enough for many industrial and real-world objects, which are mostly planar. Just as edges play a key role in 2D registration, depth discontinuities are crucial in 3D. In this paper, we investigate and develop a family of pose estimation algorithms that better exploit this boundary information. In addition to oriented surface points, we use two other primitives: boundary points with directions and boundary line segments. Our experiments show that these carefully chosen primitives encode more information compactly, thereby providing higher accuracy for a wide class of industrial parts and enabling faster computation. We demonstrate a practical robotic bin-picking system using the proposed algorithm and a 3D sensor.
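The oriented-point-pair primitive underlying this family of methods is the standard four-dimensional point pair feature (a distance plus three angles). The sketch below computes it for surface points with normals; the paper's boundary-point and line-segment primitives are analogous but not reproduced here.

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Four-dimensional feature for an oriented point pair:
    (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)), where d = p2 - p1.
    Pairs with similar features vote for consistent object poses."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    d_hat = d / dist
    angle = lambda u, v: np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
    return np.array([dist, angle(n1, d_hat), angle(n2, d_hat), angle(n1, n2)])
```

Because the feature is invariant to rigid motion, matching a scene pair to a model pair with the same feature constrains the object pose up to one rotation, which is what the voting scheme resolves.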
  • Supervised Learning of Hidden and Non-Hidden 0-Order Affordances and Detection in Real Scenes Authors: Aldoma, Aitor; Tombari, Federico; Vincze, Markus
    The ability to perceive possible interactions with the environment is a key capability of task-guided robotic agents. An important subset of possible interactions depends solely on the objects of interest and their position and orientation in the scene. We call these object-based interactions 0-order affordances and divide them into non-hidden and hidden, according to whether the current configuration of an object in the scene renders its affordance directly usable or not. In contrast to other works, we propose that detecting affordances that are not directly perceivable increases the usefulness of robotic agents with manipulation capabilities: by appropriate manipulation they can modify the object configuration until the sought affordance becomes available. In this paper we show how 0-order affordances depending on the geometry of the objects and their pose can be learned using a supervised learning strategy on 3D mesh representations of the objects, allowing the whole object geometry to be used. Moreover, we show how the learned affordances can be detected in real scenes obtained with a low-cost depth sensor such as the Microsoft Kinect, through object recognition and 6DOF pose estimation, and we present results both for learning on meshes and for detection in real scenes to demonstrate the practical applicability of the presented approach.
  • Estimating Object Grasp Sliding Via Pressure Array Sensing Authors: Alcazar, Javier Adolfo; Barajas, Leandro
    Advances in design and fabrication technologies are enabling the production and commercialization of sensor-rich robotic hands with skin-like sensor arrays. Robotic skin is poised to become a crucial interface between the robot's embodied intelligence and the external world, and the need to fuse and make sense of data extracted from skin-like sensors is readily apparent. This paper presents a real-time sensor fusion algorithm that can be used to accurately estimate object position, translation, and rotation during grasping. When an object being grasped moves across the sensor array, it creates a sliding sensation; the spatio-temporal sensations are estimated by computing localized slip vectors using an optical flow approach. These results were benchmarked against an L-inf norm approach, using a nominal known object trajectory generated by sliding and rotating an object over the sensor array with a second, high-accuracy industrial robot. Slip and rotation estimates can later be used to improve grasp quality and dexterity.
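A coarse stand-in for the localized slip-vector computation can be sketched as an integer-shift cross-correlation between consecutive pressure frames. The paper uses an optical-flow approach over localized regions; this simplified version estimates a single dominant shift for the whole array, and all names are illustrative.

```python
import numpy as np

def slip_vector(frame_prev, frame_next, max_shift=3):
    """Estimate the dominant in-plane slip (dy, dx) between two pressure
    frames by exhaustive integer-shift cross-correlation. A coarse,
    illustrative stand-in for a dense optical-flow estimate."""
    h, w = frame_prev.shape
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping regions of the two frames under shift (dy, dx).
            a = frame_prev[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
            b = frame_next[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
            score = float(np.sum(a * b))
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

Applying this per local patch instead of to the whole array yields a field of slip vectors, from which a rotation estimate can be recovered by fitting a rigid 2D motion to the field.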