TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos can be obtained by visiting this link: PaperPlaza. Step-by-step instructions for accessing these videos are here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv

Haptics

  • A Compact Tactile Display Suitable for Integration in VR and Teleoperation Authors: Sarakoglou, Ioannis; Tsagarakis, Nikolaos; Caldwell, Darwin G.
    Haptic feedback should integrate kinaesthetic and tactile feedback. However, current haptic displays do not satisfy the stringent performance and design requirements for integration in teleoperation and VR. This work presents the development of a compact, high-performance tactile display for the fingertip. The compact design, high performance, reliability, and simple connectivity of this display make it suitable for immediate integration in current VR and master-slave haptic systems. In terms of performance, this display achieves an excellent combination of force, amplitude, and spatiotemporal resolution at the tactors, surpassing the performance of devices of a similar footprint. Its operation is based on the display of surface shape to an area of the fingertip through a 4x4 array of vertically moving tactors. The tactors are spring loaded and are actuated remotely by DC motors through a flexible tendon transmission. This work presents the overall design, control, and performance of the device. A preliminary analysis of the transmission system is presented and is used to compensate for output errors induced by component elasticity.
  • Risk-Sensitive Optimal Feedback Control for Haptic Assistance Authors: Medina Hernandez, Jose Ramon; Lee, Dongheui; Hirche, Sandra
    While human behavior prediction can increase the capability of a robotic partner to generate anticipatory behavior during physical human-robot interaction (pHRI), predictions in uncertain situations can lead to large disturbances for the human if they do not match the human's intentions. In this paper, we present a risk-sensitive optimal feedback controller for haptic assistance. The human behavior is modeled using probabilistic learning methods, and any unexpected disturbance is considered a source of noise. The controller considers the inherent uncertainty of the probabilistic model and the process noise in the dynamics in order to adapt the behavior of the robot accordingly. The proposed approach is evaluated in situations with different uncertainties, process noise, and risk-sensitivities in a 2-degree-of-freedom virtual reality setup.
  • Integration Framework for NASA NextGen Volumetric Cockpit Situation Display with Haptic Feedback Authors: Robles, Jose; Sguerri, Matthew; Rorie, Conrad; Vu, Kim-Phuong; Strybel, Thomas; Marayong, Panadda
    In this paper, we present a framework for the integration of force feedback information in a NASA NextGen Volumetric Cockpit Situation Display (CSD). With the current CSD, the user retrieves operational information solely through visual displays and interacts with the CSD tools using a mouse. The advanced capabilities of the CSD may require complex manipulation of information that may be difficult to perform with input devices found in today’s cockpits. Performance with the CSD could benefit from a new user input device and enhanced user feedback modalities that can be operated safely, effectively, and intuitively in a cockpit environment. In this work, we investigate the addition of force feedback in two key CSD tasks: object selection and route manipulation. Different force feedback models were applied to communicate guidance commands, such as collision avoidance and target contact. We also discuss the development of a GUI-based software interface to allow the integration of a haptic device for the CSD. A preliminary user study was conducted on a testbed system using the Novint Falcon force-feedback device. A full experiment, assessing the effectiveness and usability of the feedback model in the CSD, will be performed in the next phase of our research.
  • Wearable Haptic Device for Cutaneous Force and Slip Speed Display Authors: Damian, Dana; Ludersdorfer, Marvin; Kim, Yeongmi; Hernandez Arieta, Alejandro; Pfeifer, Rolf; Okamura, Allison M.
    Stable grasp is the result of sensorimotor regulation of forces, ensuring sufficient grip force and the integrity of the held object. Grasping with a prosthesis introduces the challenge of finding the appropriate forces given the engineered sensorimotor prosthetic interface. Excessive force leads to unnecessary energy use and possible damage to the object. In contrast, low grip forces lead to slippage. In order for a prosthetic hand to achieve a stable grasp, the haptic information provided to the prosthesis wearer needs to display these two antagonistic grasp metrics (force and slip) in a quantified way. We present the design and evaluation of a wearable single-actuator haptic device that relays multi-modal haptic information, such as grip force and slip speed. Two belts that are activated in a mutually exclusive manner by the rotation direction of a single motor exert normal force and tangential motion on the skin surface, respectively. The wearable haptic device is able to display normal forces as a tap frequency in the range of approximately 1.5-5.0 Hz and slip speed in the range of 50-200 mm/s. Within these values, users are able to identify at least four stimulation levels for each feedback modality, with short-term training. (A minimal sketch of this force/slip-to-stimulus mapping appears after this list.)
  • Development of a Haptic Interface Using MR Fluid for Displaying Cutting Forces of Soft Tissues Authors: Tsujita, Teppei; Ohara, Manabu; Sase, Kazuya; Konno, Atsushi; Nakayama, Masano; Abe, Koyu; Uchiyama, Masaru
    In open abdominal surgical procedures, many surgical instruments, e.g., knives, cutting shears, and clamps, are generally used. Therefore, a haptic interface should display the reaction force of a soft biological tissue through such a surgical instrument. The simplest solution to this difficulty is to mechanically mount an actual instrument on a traditional haptic interface driven by servomotors. However, operators lose the sense of reality when they change instruments, since they must perform a procedure not required in actual surgery to attach/detach the instrument to/from the haptic interface. Therefore, a novel haptic interface using MR (magneto-rheological) fluid is developed in this research. The rheological properties of MR fluid can be changed in a short time by the applied magnetic flux density. By cutting the fluid with a surgical instrument, operators can feel a resistance force as if they were cutting tissue. However, MR fluid cannot display large deformations of soft tissues, since the elastic region of MR fluid is small. Therefore, the fluid container is moved by a motion table driven by servomotors. In this paper, the concept, design, and performance evaluation of the haptic interface are described.
  • Six-Degree-Of-Freedom Haptic Simulation of Organ Deformation in Dental Operations Authors: Wang, Dangxiao; Liu, Shuai; Zhang, Xin; Zhang, Yuru; Xiao, Jing
    Six-degree-of-freedom (6-DOF) haptic rendering is challenging when multi-region contacts occur between the graphic avatar of a haptic tool operated by a human user, which we call the graphic tool, and deformable objects. In this paper, we introduce a novel approach for deformation modeling based on a spring-sphere tree representation of deformable objects and a configuration-based constrained optimization method for determining the 6-dimensional configuration of the graphic tool and the contact force/torque response to the tool. This method conducts collision detection, deformation computation, and tool configuration optimization very efficiently based on the spring-sphere tree model, avoids inter-penetration, and maintains stability of haptic display without using virtual coupling. Experiments on typical dental operations are carried out to validate the efficiency and stability of the proposed method. The update rate of the haptic simulation loop is maintained at approximately 1 kHz.
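
The wearable haptic device described above displays grip force as a tap frequency (roughly 1.5-5.0 Hz) and slip as tangential belt motion (roughly 50-200 mm/s). The sketch below is a minimal, hypothetical version of such a force/slip-to-stimulus mapping: the output ranges are taken from the abstract, while the input ranges, function names, and linear scaling are assumptions rather than the authors' implementation.

```python
# Hypothetical force/slip-to-stimulus mapping for a wearable haptic display.
# Output ranges (1.5-5.0 Hz taps, 50-200 mm/s belt speed) follow the abstract;
# everything else (input ranges, names, linear scaling) is an assumption.

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    value = max(in_lo, min(in_hi, value))
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def grip_force_to_tap_frequency(force_n, max_force_n=10.0):
    """Display grip force (N) as a tap frequency in roughly 1.5-5.0 Hz."""
    return scale(force_n, 0.0, max_force_n, 1.5, 5.0)

def slip_speed_to_belt_speed(slip_mm_s, max_slip_mm_s=250.0):
    """Display slip speed (mm/s) as belt speed in roughly 50-200 mm/s."""
    return scale(slip_mm_s, 0.0, max_slip_mm_s, 50.0, 200.0)

if __name__ == "__main__":
    print(grip_force_to_tap_frequency(4.0))   # -> 2.9 Hz
    print(slip_speed_to_belt_speed(125.0))    # -> 125.0 mm/s
```

Clamping the inputs keeps out-of-range sensor readings from driving the actuator beyond the stimulation levels users were trained to discriminate.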

Learning and Adaptation Control of Robotic Systems II

  • Online Learning of Varying Stiffness through Physical Human-Robot Interaction Authors: Kronander, Klas; Billard, Aude
    Programming by Demonstration offers an intuitive framework for teaching robots how to perform various tasks without having to preprogram them. It also offers an intuitive way to provide corrections and refine teaching during task execution. Previously, mostly position constraints have been taken into account when teaching tasks from demonstrations. In this work, we tackle the problem of teaching tasks that require or can benefit from varying stiffness. This extension is not trivial, as the teacher needs to have a way of communicating to the robot what stiffness it should use. We propose a method by which the teacher can modulate the stiffness of the robot in any direction through physical interaction. The system is incremental and works online, so that the teacher can instantly feel how the robot learns from the interaction. We validate the proposed approach in two experiments on a 7-DOF Barrett WAM arm.
  • Reinforcement Planning: RL for Optimal Planners Authors: Zucker, Matthew; Bagnell, James
    Search-based planners such as A* and Dijkstra’s algorithm are proven methods for guiding today’s robotic systems. Although such planners are typically based upon a coarse approximation of reality, they are nonetheless valuable due to their ability to reason about the future and to generalize to previously unseen scenarios. However, encoding the desired behavior of a system into the underlying cost function used by the planner can be a tedious and error-prone task. We introduce Reinforcement Planning, which extends gradient-based reinforcement learning algorithms to automatically learn useful surrogate cost functions for optimal planners. Reinforcement Planning presents several advantages over other learning approaches to planning in that it is not limited by the expertise of a human demonstrator, and that it acknowledges that the domain of the planner is a simplified model of the world. We demonstrate the effectiveness of our method in learning to solve a noisy physical simulation of the well-known “marble maze” toy.
  • Adaptive Collaborative Estimation of Multi-Agent Mobile Robotic Systems Authors: Nestinger, Stephen; Demetriou, Michael
    Collaborative multi-robot systems are used in a vast array of fields for their innate ability to parallelize domain problems for faster execution. These systems are generally composed of multiple identical robotic systems in order to simplify manufacturability and programmability, reduce cost, and provide fault tolerance. This work takes advantage of the homogeneity and multiplicity of multi-robot systems to enhance the convergence rate of adaptive dynamic parameter estimation through collaboration. The collaborative adaptive dynamic parameter estimation of multi-robot systems is accomplished by penalizing the pair-wise disagreement of both state and parameter estimates. Consensus and convergence are based on Lyapunov stability arguments. Simulation studies with multiple Pioneer 3-DX systems provide verification of the proposed theoretical collaborative adaptive parameter estimation predictions.
  • Lingodroids: Learning Terms for Time Authors: Heath, Scott Christopher; Schulz, Ruth; Ball, David; Wiles, Janet
    For humans and robots to communicate using natural language it is necessary for the robots to develop concepts and associated terms that correspond to the human use of words. Time and space are foundational concepts in human language, and to develop a set of words that correspond to human notions of time and space, it is necessary to take into account the way that they are used in natural human conversations, where terms and phrases such as ‘soon’, ‘in a while’, or ‘near’ are often used. We present language learning robots called Lingodroids that can learn and use simple terms for time and space. In previous work, the Lingodroids were able to learn terms for space. In this work we extend their abilities by adding temporal variables which allow them to learn terms for time. The robots build their own maps of the world and interact socially to form a shared lexicon for location and duration terms. The robots successfully use the shared lexicons to communicate places and times to meet again.
  • Teaching Nullspace Constraints in Physical Human-Robot Interaction Using Reservoir Computing Authors: Nordmann, Arne; Rüther, Stefan; Wrede, Sebastian; Steil, Jochen J.
    A major goal of current robotics research is to enable robots to become co-workers that collaborate with humans efficiently and adapt to changing environments or workflows. We present an approach utilizing the physical interaction capabilities of compliant robots with data-driven and model-free learning in a coherent system in order to make fast reconfiguration of redundant robots feasible. Users with no particular robotics knowledge can perform this task in physical interaction with the compliant robot, for example to reconfigure a work cell due to changes in the environment. For fast and efficient training of the respective mapping, an associative reservoir neural network is employed. It is embedded in the motion controller of the system, hence allowing for the execution of arbitrary motions in task space. We describe the training, exploration, and control architecture of the system, and present an evaluation on the KUKA Light-Weight Robot. Our results show that the learned model solves the redundancy resolution problem under the given constraints with sufficient accuracy and generalizes to generate valid joint-space trajectories even in untrained areas of the workspace.
  • A Bayesian Nonparametric Approach to Modeling Battery Health Authors: Joseph, Joshua; Doshi, Finale; Roy, Nicholas
    The batteries of many consumer products are often both a substantial portion of the item's cost and a common first point of failure. Accurately predicting remaining battery life can lower costs by reducing unnecessary battery replacements. Unfortunately, battery dynamics are extremely complex, and we often lack the domain knowledge required to construct a model by hand. In this work, we take a data-driven approach and aim to learn a model of battery time-to-death from training data. Using a Dirichlet process prior over mixture weights, we learn an infinite mixture model for battery health. The Bayesian aspect of our model helps to avoid over-fitting while the nonparametric nature of the model allows the data to control the size of the model, preventing under-fitting. We demonstrate our model's effectiveness by making time-to-death predictions using real data from iRobot Roomba batteries. (A minimal modeling sketch follows this item.)
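
The battery-health model above hinges on a Dirichlet process prior over mixture weights, which lets the data determine how many mixture components are actually used. The sketch below illustrates that idea with scikit-learn's truncated Dirichlet process Gaussian mixture on synthetic time-to-death data; the data, truncation level, and concentration value are illustrative assumptions, not the paper's Roomba dataset or model.

```python
# Infinite (truncated DP) mixture over battery time-to-death, as a sketch.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic "time-to-death" observations (hours) from two battery populations.
ttd = np.concatenate([rng.normal(300, 20, 200), rng.normal(120, 15, 100)])
X = ttd.reshape(-1, 1)

dpgmm = BayesianGaussianMixture(
    n_components=10,                                   # truncation level
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.5,                    # DP concentration
    max_iter=500,
    random_state=0,
).fit(X)

# Components with negligible weight are effectively pruned by the DP prior,
# so model size is driven by the data rather than fixed by hand.
active = dpgmm.weights_ > 0.01
print("active components:", active.sum())
print("component means (hours):", dpgmm.means_[active].ravel())
```

The Bayesian treatment of the weights is what guards against over-fitting here: unused components collapse toward zero weight instead of chasing noise.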

Parts Handling and Manipulation

  • Design of Parts Handling and Gear Assembling Device Authors: Yamaguchi, Kengo; Hirata, Yasuhisa; Kaisumi, Aya; Kosuge, Kazuhiro
    Many one-degree-of-freedom (1-DOF) grippers have been used in factories. This paper focuses on the design of a 1-DOF parts handling device for picking up small objects robustly and agilely and realizing assembly tasks. In our previous research, we proposed a concept for the handling device, which cages an object without letting the object escape from its tips before closing them completely and then grasps the object robustly at a unique position of the tips. In this paper, we propose a method for designing the shape of the device's tips by considering not only the caging and self-alignment of the object but also the gear assembly task. We also develop the robust and agile pick-up device (RAPiD) with tips designed by the new method and present experimental results that illustrate the ability of RAPiD to handle and assemble gears.
  • Optimal Admittance Characteristics for Planar Force-Assembly of Convex Polygonal Parts Authors: Wiemer, Steven; Schimmels, Joseph
    Robots are not typically used for assembly tasks in which positioning requirements exceed robot capabilities. To address this limitation, a significant amount of work has been directed toward identifying desirable mechanical behavior of a robot for force-guided assembly. Most of this work has been directed toward the 'standard' peg-in-hole assembly problem. Little has been done to identify the specific behavior necessary for reliable assembly for different types of polygonal parts, and little has been done relating assembly characteristics to classes of part geometries. This paper presents the best passive admittance and associated maximum coefficient of friction for planar force-assembly of a variety of different polygonal parts, specifically pegs with rectangular, trapezoidal, triangular, and pentagonal cross sections. The results show that force-guided assembly can be reliably achieved at higher values of friction when parts are shorter and wider. For all geometries considered, force-guided assembly is ensured for any value of friction less than 0.8 when the optimal admittance is used; and, for some geometries, for any value of friction less than 15.
  • The Effect of Anisotropic Friction on Vibratory Velocity Fields Authors: Umbanhowar, Paul; Vose, Thomas; Mitani, Atsushi; Hirai, Shinichi; Lynch, Kevin
    This paper explores the role of anisotropic friction properties in vibratory parts manipulation. We show that direction-dependent surface friction properties can be used in conjunction with a vibrating plate to help design friction-induced velocity fields on the surface of the plate. Theoretical, simulation, and experimental results are presented quantifying the anisotropic friction effects of textured surfaces such as micromachined silicon and fabrics.
  • Sparse Spatial Coding: A Novel Approach for Efficient and Accurate Object Recognition Authors: Leivas, Gabriel; Nascimento, Erickson; Wilson Vieira, Antonio; Campos, Mario Montenegro
    Successful state-of-the-art object recognition techniques for images have been based on powerful methods, such as sparse representation, replacing the also popular vector quantization (VQ) approach. Recently, sparse coding, which is characterized by representing a signal in a sparse space, has raised the bar on several object recognition benchmarks. However, one serious drawback of sparse-space-based methods is that similar local features can be quantized into different visual words. We present in this paper a new method, called Sparse Spatial Coding (SSC), which combines sparse coding dictionary learning, a spatial constraint coding stage, and an online classification method to improve object recognition. An efficient new off-line classification algorithm is also presented. We overcome the problem of techniques that use sparse representation alone by generating the final representation with SSC and max pooling, which is then presented to an online learning classifier. Experimental results obtained on the Caltech 101, Caltech 256, Corel 5000, and Corel 10000 databases show that, to the best of our knowledge, our approach surpasses in accuracy the best published results to date on the same databases. As an extension, we also show high performance results on the MIT-67 indoor scene recognition dataset. (A rough coding-and-pooling sketch appears after this list.)
  • Humanoid's Dual Arm Object Manipulation Based on Virtual Dynamics Model Authors: Shin, Sung Yul; Lee, Jun won; Kim, ChangHwan
    In order to implement promising robot applications in our daily lives, robots need to perform manipulation tasks within human environments. Especially for a humanoid robot, it is essential to manipulate a variety of objects with different shapes and sizes to assist humans in those environments. This paper presents a method of manipulating objects with a humanoid robot's dual arms. The robot is usually asked to control both motion and force to manipulate the objects. We propose a novel control method based on a virtual dynamics model (VDM), which enables the robot to perform both the task of reaching for an object and that of grasping it under a uniform control system. Furthermore, the impedance model based on the VDM controller also enables the robot to safely grasp an object by reducing the impact at the contact point. The proposed algorithm is implemented on the humanoid robot Mahru, with independent joint controllers at each motor. Its performance is demonstrated by manipulating different types of objects.
  • A Kernel-Based Approach to Direct Action Perception Authors: Kroemer, Oliver; Ugur, Emre; Oztop, Erhan; Peters, Jan
    The direct perception of actions allows a robot to predict the afforded actions of observed objects. In this paper, we present a non-parametric approach to representing the affordance-bearing subparts of objects. This representation forms the basis of a kernel function for computing the similarity between different subparts. Using this kernel function, together with motor primitive actions, the robot can learn the required mappings to perform direct action perception. The proposed approach was successfully implemented on a real robot, which could then quickly learn to generalize grasping and pouring actions to novel objects.
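
The Sparse Spatial Coding entry above builds image representations by sparse-coding local descriptors against a learned dictionary and then pooling the codes. The sketch below is a rough, assumption-heavy version of that coding-and-pooling pipeline in scikit-learn: the random descriptors, dictionary size, max pooling, and offline linear classifier are stand-ins, and the paper's spatial constraints and online classifier are not reproduced.

```python
# Dictionary learning + sparse coding + max pooling, as a rough sketch.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(2000, 128))  # stand-in for SIFT-like features

# 1. Learn an overcomplete dictionary from pooled training descriptors.
dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0, random_state=0)
dico.fit(train_descriptors)

def image_representation(descriptors, dictionary):
    """Sparse-code each local descriptor, then max-pool over the image."""
    codes = sparse_encode(descriptors, dictionary,
                          algorithm="omp", n_nonzero_coefs=5)
    return np.abs(codes).max(axis=0)              # one 256-dim vector per image

# 2. Build per-image features and train a linear classifier (offline stand-in).
images = [rng.normal(size=(100, 128)) for _ in range(20)]
labels = rng.integers(0, 2, size=20)
features = np.vstack([image_representation(d, dico.components_) for d in images])
clf = LinearSVC().fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```

Max pooling is used here because, combined with sparse codes, it tends to be more robust to the quantization problem the abstract mentions than simply averaging the codes.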