TechTalks from event: Technical session talks from ICRA 2012

The conference registration code required to access these videos is available here: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided your consent form for your video to be published and it is still missing, please contact support@techtalks.tv.

Learning and Adaptation Control of Robotic Systems II

  • Online Learning of Varying Stiffness through Physical Human-Robot Interaction Authors: Kronander, Klas; Billard, Aude
    Programming by Demonstration offers an intuitive framework for teaching robots how to perform various tasks without having to preprogram them. It also offers an intuitive way to provide corrections and refine teaching during task execution. Previously, mostly position constraints have been taken into account when teaching tasks from demonstrations. In this work, we tackle the problem of teaching tasks that require or can benefit from varying stiffness. This extension is not trivial, as the teacher needs a way of communicating to the robot what stiffness it should use. We propose a method by which the teacher can modulate the stiffness of the robot in any direction through physical interaction. The system is incremental and works online, so that the teacher can instantly feel how the robot learns from the interaction. We validate the proposed approach in two experiments on a 7-DoF Barrett WAM arm.
  • Reinforcement Planning: RL for Optimal Planners Authors: Zucker, Matthew; Bagnell, James
    Search-based planners such as A* and Dijkstra’s algorithm are proven methods for guiding today’s robotic systems. Although such planners are typically based upon a coarse approximation of reality, they are nonetheless valuable due to their ability to reason about the future and to generalize to previously unseen scenarios. However, encoding the desired behavior of a system into the underlying cost function used by the planner can be a tedious and error-prone task. We introduce Reinforcement Planning, which extends gradient-based reinforcement learning algorithms to automatically learn useful surrogate cost functions for optimal planners. Reinforcement Planning presents several advantages over other learning approaches to planning in that it is not limited by the expertise of a human demonstrator and it acknowledges that the domain of the planner is a simplified model of the world. We demonstrate the effectiveness of our method in learning to solve a noisy physical simulation of the well-known “marble maze” toy. [A toy sketch of learning planner cost weights appears after this session's list.]
  • Adaptive Collaborative Estimation of Multi-Agent Mobile Robotic Systems Authors: Nestinger, Stephen; Demetriou, Michael
    Collaborative multi-robot systems are used in a vast array of fields for their innate ability to parallelize domain problems for faster execution. These systems are generally composed of multiple identical robotic systems in order to simplify manufacturability and programmability, reduce cost, and provide fault tolerance. This work takes advantage of the homogeneity and multiplicity of multi-robot systems to enhance the convergence rate of adaptive dynamic parameter estimation through collaboration. The collaborative adaptive dynamic parameter estimation of multi-robot systems is accomplished by penalizing the pairwise disagreement of both state and parameter estimates. Consensus and convergence are established using Lyapunov stability arguments. Simulation studies with multiple Pioneer 3-DX systems provide verification of the theoretical predictions of the proposed collaborative adaptive parameter estimation scheme. [A toy consensus-update sketch appears after this session's list.]
  • Lingodroids: Learning Terms for Time Authors: Heath, Scott Christopher; Schulz, Ruth; Ball, David; Wiles, Janet
    For humans and robots to communicate using natural language it is necessary for the robots to develop concepts and associated terms that correspond to the human use of words. Time and space are foundational concepts in human language, and to develop a set of words that correspond to human notions of time and space, it is necessary to take into account the way that they are used in natural human conversations, where terms and phrases such as ‘soon’, ‘in a while’, or ‘near’ are often used. We present language learning robots called Lingodroids that can learn and use simple terms for time and space. In previous work, the Lingodroids were able to learn terms for space. In this work we extend their abilities by adding temporal variables which allow them to learn terms for time. The robots build their own maps of the world and interact socially to form a shared lexicon for location and duration terms. The robots successfully use the shared lexicons to communicate places and times to meet again.
  • Teaching Nullspace Constraints in Physical Human-Robot Interaction Using Reservoir Computing Authors: Nordmann, Arne; Rüther, Stefan; Wrede, Sebastian; Steil, Jochen J.
    A major goal of current robotics research is to enable robots to become co-workers that collaborate with humans efficiently and adapt to changing environments or workflows. We present an approach utilizing the physical interaction capabilities of compliant robots with data-driven and model-free learning in a coherent system, in order to make fast reconfiguration of redundant robots feasible. Users with no particular robotics knowledge can perform this task in physical interaction with the compliant robot, for example to reconfigure a work cell due to changes in the environment. For fast and efficient training of the respective mapping, an associative reservoir neural network is employed. It is embedded in the motion controller of the system, hence allowing for execution of arbitrary motions in task space. We describe the training, the exploration, and the control architecture of the system, and present an evaluation on the KUKA Light-Weight Robot. Our results show that the learned model solves the redundancy resolution problem under the given constraints with sufficient accuracy and generalizes to generate valid joint-space trajectories even in untrained areas of the workspace.
  • A Bayesian Nonparametric Approach to Modeling Battery Health Authors: Joseph, Joshua; Doshi, Finale; Roy, Nicholas
    The batteries of many consumer products are often both a substantial portion of the item's cost and commonly a first point of failure. Accurately predicting remaining battery life can lower costs by reducing unnecessary battery replacements. Unfortunately, battery dynamics are extremely complex, and we often lack the domain knowledge required to construct a model by hand. In this work, we take a data-driven approach and aim to learn a model of battery time-to-death from training data. Using a Dirichlet process prior over mixture weights, we learn an infinite mixture model for battery health. The Bayesian aspect of our model helps to avoid over-fitting while the nonparametric nature of the model allows the data to control the size of the model, preventing under-fitting. We demonstrate our model's effectiveness by making time-to-death predictions using real data from iRobot Roomba batteries. [A toy Dirichlet-process mixture sketch appears after this session's list.]
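
Sketch for "Reinforcement Planning: RL for Optimal Planners": the snippet below is not the authors' algorithm, only a minimal stand-in under assumed ingredients. It learns the weights of a linear surrogate cell-cost function for a grid Dijkstra planner via a stochastic finite-difference (SPSA-style) update of the noisy reward obtained when each planned path is "executed". The grid, features, reward model, gains, and the plan/execute helpers are all invented for illustration.

    import heapq
    import numpy as np

    rng = np.random.default_rng(0)
    H, W, K = 10, 10, 3                      # grid size, number of cost features
    features = rng.random((H, W, K))         # per-cell features (hypothetical)

    def plan(weights):
        """Dijkstra from (0, 0) to (H-1, W-1) with cell cost = weights . features."""
        cost = features @ weights + 1e-3     # keep cell costs strictly positive
        dist, prev = {(0, 0): 0.0}, {}
        pq = [(0.0, (0, 0))]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if (r, c) == (H - 1, W - 1):
                break
            if d > dist.get((r, c), np.inf):
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < H and 0 <= nc < W and d + cost[nr, nc] < dist.get((nr, nc), np.inf):
                    dist[(nr, nc)] = d + cost[nr, nc]
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (dist[(nr, nc)], (nr, nc)))
        path, node = [], (H - 1, W - 1)
        while node != (0, 0):
            path.append(node)
            node = prev[node]
        return path[::-1]

    def execute(path):
        """Noisy reward of a path; stands in for running it on the real system."""
        true_w = np.array([1.0, 0.2, 2.0])   # unknown to the learner
        return -sum(features[r, c] @ true_w for r, c in path) + rng.normal(0.0, 0.1)

    w = np.ones(K)                           # surrogate cost weights
    for _ in range(200):                     # stochastic finite-difference ascent
        delta = 0.1 * rng.normal(size=K)
        g = (execute(plan(w + delta)) - execute(plan(w - delta))) / 2.0
        w = np.clip(w + 0.05 * g * delta, 0.01, None)
    print("learned cost weights:", w)

The point is only the loop structure: plan with the current surrogate cost, execute the plan, and nudge the cost weights toward higher executed reward.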
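
Sketch for "Adaptive Collaborative Estimation of Multi-Agent Mobile Robotic Systems": a minimal discrete-time illustration assuming a linear-in-parameters model, synthetic measurements, and made-up gains (none of this is the paper's formulation). Each agent combines a local adaptation step with a penalty on its pairwise disagreement with the other agents' parameter estimates.

    import numpy as np

    rng = np.random.default_rng(1)
    N, P = 4, 3                          # number of agents, parameter dimension
    theta_true = np.array([0.5, -1.2, 2.0])
    theta_hat = rng.normal(size=(N, P))  # each agent starts from its own guess

    gamma, k, dt = 2.0, 1.0, 0.01        # adaptation gain, coupling gain, step
    for _ in range(5000):
        for i in range(N):
            phi = rng.normal(size=P)                      # agent i's regressor
            y = phi @ theta_true + rng.normal(0.0, 0.05)  # noisy local measurement
            local = gamma * phi * (y - phi @ theta_hat[i])
            consensus = -k * np.sum(theta_hat[i] - theta_hat, axis=0)
            theta_hat[i] += dt * (local + consensus)

    print("per-agent estimates:\n", theta_hat)   # rows should approach theta_true

Setting the coupling gain k to zero recovers independent adaptation; the disagreement penalty is what couples the estimates, which is the mechanism the abstract credits for the improved convergence rate.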
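
Sketch for "A Bayesian Nonparametric Approach to Modeling Battery Health": an illustration of the general idea only (letting a Dirichlet-process prior decide how many mixture components the data support), using synthetic data, a single invented discharge-rate feature, and scikit-learn's truncated DP Gaussian mixture rather than the paper's model. Time-to-death for a new battery is predicted by the mixture-conditional mean.

    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(2)
    # Two hidden battery "types" generate the synthetic training data.
    x = np.concatenate([rng.normal(1.0, 0.2, 200), rng.normal(2.5, 0.3, 200)])
    t = np.concatenate([rng.normal(400.0, 30.0, 200), rng.normal(150.0, 20.0, 200)])
    data = np.column_stack([x, t])          # columns: feature, time-to-death

    dpgmm = BayesianGaussianMixture(
        n_components=10,                    # truncation level; data prune the rest
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        random_state=0,
    ).fit(data)

    def predict_ttd(x_new):
        """Conditional mean of time-to-death given the feature value x_new."""
        w, m = [], []
        for pi, mu, cov in zip(dpgmm.weights_, dpgmm.means_, dpgmm.covariances_):
            w.append(pi * norm.pdf(x_new, mu[0], np.sqrt(cov[0, 0])))
            m.append(mu[1] + cov[1, 0] / cov[0, 0] * (x_new - mu[0]))
        w = np.asarray(w) / np.sum(w)
        return float(w @ np.asarray(m))

    print("predicted time-to-death at x = 1.0:", round(predict_ttd(1.0)))
    print("predicted time-to-death at x = 2.5:", round(predict_ttd(2.5)))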

Multi-Legged Robots

  • Stable Dynamic Walking of a Quadruped "Kotetsu" Using Phase Modulations Based on Leg Loading/Unloading against a Lateral Perturbation Authors: Maufroy, Christophe; Kimura, Hiroshi; Nishikawa, Tomohiro
    We intend to show the basis of a general legged locomotion controller with the ability to integrate both posture and rhythmic motion controls. We used leg loading and unloading for the phase transitions from swing to stance and from stance to swing, respectively, and showed the following in our previous 3D model simulation study: (a) as a result of the phase modulations based on leg loading/unloading, rhythmic motion of each leg was achieved and leg coordination (resulting in a gait) emerged, even without explicit coordination among the leg controllers, allowing dynamic walking to be realized in the low- to medium-speed range; and (b) an additional ascending coordination mechanism between ipsilateral leg controllers was nevertheless necessary to improve the stability. In this paper, we report on experimental results using “Kotetsu” under a lateral perturbation while walking and compare them with the results of our previous simulations. [A minimal phase-transition sketch appears after this session's list.]
  • Dynamic Torque Control of a Hydraulic Quadruped Robot Authors: Boaventura, Thiago; Semini, Claudio; Buchli, Jonas; Frigerio, Marco; Focchi, Michele; Caldwell, Darwin G.
    Legged robots have the potential to serve as versatile and useful autonomous robotic platforms for use in unstructured environments such as disaster sites. They need to be capable of both fast dynamic locomotion and precise movements. However, there is a lack of platforms with suitable mechanical properties and adequate controllers to advance the research in this direction. In this paper we present results on the novel research platform HyQ, a torque-controlled hydraulic quadruped robot. We identify the requirements for versatile robotic legged locomotion and show that HyQ fulfills most of these specifications. We show that HyQ is able to perform both static and dynamic movements and can cope with the mechanical requirements of dynamic motions such as jumping and trotting. The required control, both at the hydraulic level (force/torque control) and at the whole-body level (rigid-model-based control), is discussed.
  • Kinematic Control and Posture Optimization of a Redundantly Actuated Quadruped Robot Authors: Thomson, Travis; Sharf, Inna; Beckman, Blake
    Although legged locomotion for robots has been studied for many years, research on autonomous wheel-legged robots is much more recent. Robots of this type, also described as hybrid, can take advantage of the energy efficiency of wheeled locomotion while adapting to more difficult terrain with legged locomotion when necessary. The Micro Hydraulic Toolkit (MHT), developed by engineers at Defence R&D Canada – Suffield, is a good example of such a robot. Investigation into control and optimization techniques for the MHT leads to a better understanding of hybrid vehicle control for terrestrial exploration and reconnaissance. Control of hybrid robots has been studied by several researchers during the last decade. The methodology applied in this work uses an inverse kinematics algorithm developed previously for the hybrid robot Hylos and implements an optimization technique to minimize the torques at crucial actuators. In addition, functionality is added to the control method to implement stepping maneuvers. This paper presents the results obtained via co-simulation using Matlab’s Simulink and a high-fidelity model of the MHT in LMS Virtual Lab.
  • Optimally Scaled Hip-Force Planning: A Control Approach for Quadrupedal Running Authors: Valenzuela, Andrés; Kim, Sangbae
    This paper presents Optimally Scaled Hip-Force Planning (OSHP), a novel approach to controlling the body dynamics of running robots. Controllers based on OSHP form the high-level component of a hierarchical control scheme in which they direct lower level controllers, each responsible for coordinating the motion of a single leg. An OSHP controller takes in the state of the runner at the apex of its primary aerial phase and returns desired profiles for the vertical and horizontal forces to be exerted at each hip during the subsequent stride. The hip force profiles returned by OSHP are scaled variants of nominal force profiles based on biological ground reaction force data. The OSHP controller determines the scaling parameters for these profiles through constrained nonlinear optimization on an approximate model of the runner's body dynamics. Evaluation of an OSHP controller for a quadruped model in simulation shows that even with very simple leg controllers, the OSHP controller can accelerate the runner from rest to steady-state running without a pre-defined footfall sequence. [A small constrained-optimization sketch of the profile-scaling idea appears after this session's list.]
  • Enforced Symmetry of the Stance Phase for the Spring-Loaded Inverted Pendulum Authors: Piovan, Giulia; Byl, Katie
    The Spring-Loaded Inverted Pendulum (SLIP) is considered the simplest model to effectively describe bouncing gaits (such as running and hopping) for many legged animals and robots. For this reason, it has often been used as a model for robot design. A key challenge in using this model, however, is the lack of a closed-form solution for the equations of motion that define the stance phase of its dynamics. This makes it impossible to predict its trajectory analytically. Consequently, developing a practical control strategy to operate on the model is computationally intensive, because accurately predicting the step-to-step dynamics is still an unsolved problem. By adding an actuator in series with the spring, we can develop a control law for actuator displacement which enforces a desired trajectory during stance. In particular, for our specific chosen control law, we can compute an analytical solution for the stance phase trajectory. Furthermore, we give examples of higher-level control strategies for foothold placement and for keeping the forward velocity or the apex height constant on rough terrain that employ our low-level control laws, and we illustrate through simulations the typical performance of our strategy. [A basic SLIP stance-phase integration sketch appears after this session's list.]
  • A Behavior Based Locomotion Controller with Learning for Disturbance Compensation in Bipedal Robots Authors: Beranek, Richard; Ahmadi, Mojtaba
    A novel behavior based locomotion controller (BBLC) capable of adapting to unknown disturbances is presented. The proposed controller implements a behavior based control architecture by subdividing the walking control into several task-space controllers, such as swing leg control and center of gravity (COG) position control. For each task-space controller, a number of behaviors, which plan the reference task-space trajectories, are designed based on existing stabilizing controllers or strategies inspired by human walking biomechanics. A Q-learning algorithm is used to classify which behavior combinations can compensate for specific disturbances. The controller is implemented on a planar biped simulation with push-type disturbances applied on flat and sloped terrain. The results show that stabilization strategies capable of compensating for these disturbances emerge from the combination of different task-level behaviors, without a priori knowledge of the nature of the disturbances. [A toy Q-learning behavior-selection sketch appears after this session's list.]
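
Sketch for the Kotetsu talk: a bare-bones version of the load-based phase rule only; the thresholds, the normalised load signal, and the toy driving loop are invented. Each leg controller switches swing to stance when its leg becomes loaded and stance to swing when it is unloaded, the local rule from which, per the abstract, a gait can emerge without explicit inter-leg coordination.

    import math
    from dataclasses import dataclass

    @dataclass
    class LegController:
        phase: str = "stance"
        load_on: float = 0.6     # normalised load above which the leg is "loaded"
        load_off: float = 0.2    # normalised load below which it is "unloaded"

        def update(self, load: float) -> str:
            if self.phase == "swing" and load > self.load_on:
                self.phase = "stance"      # touch-down: the leg has taken weight
            elif self.phase == "stance" and load < self.load_off:
                self.phase = "swing"       # lift-off: the weight has shifted away
            return self.phase

    legs = [LegController() for _ in range(4)]
    for step in range(20):                 # synthetic, phase-shifted load signals
        loads = [0.5 + 0.5 * math.sin(0.5 * step + i * math.pi / 2) for i in range(4)]
        print(step, [leg.update(load) for leg in legs])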
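
Sketch for "Optimally Scaled Hip-Force Planning": only the profile-scaling idea, with assumed half-sine nominal profiles, made-up body parameters, and deliberately simplified impulse constraints; this is not the paper's cost or body-dynamics model. Scale factors for the nominal vertical and fore-aft hip-force profiles are found by constrained optimization so that the stride supports the body weight and produces a desired change in forward speed.

    import numpy as np
    from scipy.optimize import minimize

    m, g = 30.0, 9.81                  # assumed body mass (kg) and gravity
    T_stride, T_stance = 0.40, 0.25    # assumed stride period and stance time (s)
    dvx_des = 0.3                      # desired change in forward speed (m/s)

    t = np.linspace(0.0, T_stance, 200)
    dt = t[1] - t[0]
    fz_nom = np.sin(np.pi * t / T_stance)    # nominal (unit) vertical profile
    fx_nom = np.sin(np.pi * t / T_stance)    # nominal (unit) fore-aft profile

    def impulses(scales):
        """Vertical and horizontal impulses of the scaled profiles over stance."""
        sz, sx = scales
        return np.sum(sz * fz_nom) * dt, np.sum(sx * fx_nom) * dt

    res = minimize(
        lambda s: float(np.sum(np.square(s))),   # prefer small scale factors
        x0=[100.0, 10.0],
        constraints=(
            {"type": "eq", "fun": lambda s: impulses(s)[0] - m * g * T_stride},
            {"type": "eq", "fun": lambda s: impulses(s)[1] - m * dvx_des},
        ),
    )
    print("vertical scale %.1f N, fore-aft scale %.1f N" % tuple(res.x))

In the paper the optimization acts on an approximate model of the runner's body dynamics; here the constraints are reduced to simple impulse balances purely to show the structure.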
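
Sketch for the SLIP talk: a plain numerical integration of the standard SLIP stance dynamics in polar coordinates, with a series actuator displacement u(t) added to the leg rest length. The mass, stiffness, and touch-down state are assumed, and u(t) is left as a zero placeholder; the paper's control law for u, which makes the stance trajectory analytically solvable, is not reproduced here.

    import numpy as np
    from scipy.integrate import solve_ivp

    m, k, l0, g = 80.0, 20000.0, 1.0, 9.81   # mass, stiffness, rest length, gravity

    def u(t):
        return 0.0                           # actuator displacement (placeholder)

    def stance_dynamics(t, s):
        r, dr, th, dth = s                   # leg length, leg angle from vertical
        ddr = r * dth**2 - g * np.cos(th) + k / m * (l0 + u(t) - r)
        ddth = (g * np.sin(th) - 2.0 * dr * dth) / r
        return [dr, ddr, dth, ddth]

    def liftoff(t, s):                       # spring back at its rest length
        return s[0] - (l0 + u(t))
    liftoff.terminal, liftoff.direction = True, 1.0

    # assumed touch-down state: leg at rest length, compressing, tilted 20 degrees
    s0 = [l0, -1.0, -np.radians(20.0), 2.0]
    sol = solve_ivp(stance_dynamics, (0.0, 1.0), s0, events=liftoff,
                    max_step=1e-3, rtol=1e-8)
    print("lift-off at t = %.3f s, leg angle = %.1f deg"
          % (sol.t[-1], np.degrees(sol.y[2, -1])))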
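
Sketch for the behavior-based locomotion controller talk: a toy, one-step (bandit-style) Q-table in which the state is a discretised push disturbance, the action is the index of a behavior combination, and the reward is 1 when a synthetic recovery succeeds. The bin counts, success probabilities, and learning gains are invented; the sketch only shows the "learn which behavior combination compensates which disturbance" loop, not the biped simulation.

    import numpy as np

    rng = np.random.default_rng(3)
    n_states, n_actions = 6, 4        # push-disturbance bins x behavior combinations
    # Hidden ground truth: probability that combination a recovers from push s.
    p_success = rng.uniform(0.1, 0.9, size=(n_states, n_actions))

    Q = np.zeros((n_states, n_actions))
    alpha, eps = 0.1, 0.2             # learning rate, exploration rate
    for _ in range(20000):
        s = rng.integers(n_states)                       # a random push arrives
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        r = float(rng.random() < p_success[s, a])        # 1 if the biped recovers
        Q[s, a] += alpha * (r - Q[s, a])                 # one-step Q/bandit update

    print("learned combination per push bin:", Q.argmax(axis=1))
    print("best combination per push bin   :", p_success.argmax(axis=1))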

Localization II

  • Road Vehicle Localization with 2D Push-Broom Lidar and 3D Priors Authors: Baldwin, Ian Alan; Newman, Paul
    In this paper we describe and demonstrate a method for precisely localizing a road vehicle using a single push-broom 2D laser scanner while leveraging a prior 3D survey. In contrast to conventional scan matching, our laser is oriented downwards, thus causing continual ground strike. Our method exploits this to produce a small 3D swathe of laser data which can be matched statistically within the 3D survey. This swathe generation is predicated upon time varying estimates of vehicle velocity. While in theory this data could be obtained from vehicle speedometers, in reality these instruments are biased and so we also provide a way to estimate this bias from survey data. We show that our low cost system consistently outperforms a high calibre integrated DGPS/IMU system over 26 km of driven path around a test site.
  • Radar-Only Localization and Mapping for Ground Vehicle at High Speed and for Riverside Boat Authors: Vivet, Damien; Checchin, Paul; Chapuis, Roland
    The use of a rotating range sensor in high speed robotics creates distortions in the collected data. In the majority of studies, such an effect is ignored or treated as noise and then corrected, based on proprioceptive sensors or localization systems. In this study, we consider that the distortion contains information about the vehicle's displacement. We propose to extract this information from the distortion without using any data other than the exteroceptive sensor measurements. The only sensor used in this work is a panoramic Frequency Modulated Continuous Wave (FMCW) radar called K2Pi. No odometer, gyrometer or other proprioceptive sensor is used. The idea is to resort to velocimetry by analyzing the distortion of the measurements. As a result, the linear and angular velocities of the mobile robot are estimated and used to build, without any other sensor, the trajectory of the vehicle and then the radar map of outdoor environments. In this paper, radar-only localization and mapping results are presented for a ground vehicle and a riverbank application. This work can easily be extended to other slowly rotating range sensors.
  • LAPS - Localisation using Appearance of Prior Structure: 6-DoF Monocular Camera Localisation using Prior Pointclouds Authors: Stewart, Alex; Newman, Paul
    This paper is about pose estimation using monocular cameras with a 3D laser pointcloud as a workspace prior. We have in mind autonomous transport systems in which low-cost vehicles equipped with monocular cameras are furnished with preprocessed 3D lidar surveys of the workspace. Our inherently cross-modal approach offers robustness to changes in scene lighting and is computationally cheap. At the heart of our approach lies inference of camera motion by minimisation of the Normalised Information Distance (NID) between the appearance of 3D lidar data reprojected into overlapping images. Results are presented which demonstrate the applicability of this approach to the localisation of a camera against a lidar pointcloud using data gathered from a road vehicle. [A small sketch of the NID cost appears after this session's list.]
  • An Outdoor High-Accuracy Local Positioning System for an Autonomous Robotic Golf Greens Mower Authors: Smith, Aaron; Chang, H. Jacky; Blanchard, Edward
    This paper presents a high-accuracy local positioning system (LPS) for an autonomous robotic greens mower. The LPS uses a sensor tower mounted on top of the robot and four active beacons surrounding a target area. The proposed LPS concurrently determines robot location using a lateration technique and calculates orientation using angle measurements. To perform localization, the sensor tower emits an ultrasonic pulse that is received by the beacons. The time of arrival is measured by each beacon and transmitted back to the sensor tower. To determine the robot’s orientation, the sensor tower has a circular receiver array that detects infrared signals emitted by each beacon. Using the direction and strength of the received infrared signals, the relative angles to each beacon are obtained and the robot orientation can be determined. Experimental data show that the LPS achieves a position accuracy of 3.1 cm RMS, and an orientation accuracy of 0.23° RMS. Several prototype robotic mowers utilizing the proposed LPS have been deployed for field testing, and the mowing results are comparable to those of an experienced professional human worker. [A brief lateration sketch appears after this session's list.]
  • Curb-Intersection Feature Based Monte Carlo Localization on Urban Roads Authors: Qin, Baoxing; Chong, Zhuang Jie; Bandyopadhyay, Tirthankar; Ang Jr, Marcelo H; Frazzoli, Emilio; Rus, Daniela
    One of the most prominent features on an urban road is the curb, which defines the boundary of the road surface. An intersection is a junction of two or more roads, appearing where no curb exists. The combination of curb and intersection features and their idiosyncrasies carries significant information about the urban road network that can be exploited to improve a vehicle's localization. This paper introduces a Monte Carlo Localization (MCL) method using curb-intersection features on urban roads. We propose the novel idea of a "Virtual LIDAR" to obtain measurement models for these features. Under the MCL framework, the above road observations are fused with odometry information, yielding precise localization. We implement the system using a single tilted 2D LIDAR on our autonomous test bed and show robust performance in the presence of occlusion from other vehicles and pedestrians. [A toy Monte Carlo localization sketch appears after this session's list.]
  • Satellite Image Based Precise Robot Localization on Sidewalks Authors: Senlet, Turgay; Elgammal, Ahmed
    In this paper, we present a novel computer vision framework for precise localization of a mobile robot on sidewalks. In our framework, we combine stereo camera images, visual odometry, satellite map matching, and a sidewalk probability transfer function obtained from street maps in order to attain globally corrected localization results. The framework is capable of precisely localizing a mobile robot platform that navigates on sidewalks, without the use of traditional wheel odometry, GPS or INS inputs. On a complex 570-meter sidewalk route, we show that we obtain superior localization results compared to visual odometry and GPS.
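
Sketch for "LAPS": only the Normalised Information Distance cost, computed from a joint intensity histogram. A synthetic reflectance image stands in for the reprojected lidar appearance, a nonlinear intensity change for the modality gap, and a 1-D pixel shift for the 6-DoF camera pose that the paper optimises.

    import numpy as np

    def nid(a, b, bins=32):
        """NID(a, b) = (H(a, b) - I(a; b)) / H(a, b); 0 if identical, 1 if independent."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = joint / joint.sum()
        px, py = p.sum(axis=1), p.sum(axis=0)
        h_joint = -np.sum(p[p > 0] * np.log(p[p > 0]))
        h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
        h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
        mutual_info = h_x + h_y - h_joint
        return (h_joint - mutual_info) / h_joint

    rng = np.random.default_rng(4)
    lidar_appearance = rng.random((100, 200))                    # synthetic reflectance map
    camera_image = np.roll(lidar_appearance, 7, axis=1) ** 0.8   # shifted, different "modality"

    shifts = np.arange(-15, 16)
    costs = [nid(np.roll(lidar_appearance, s, axis=1), camera_image) for s in shifts]
    print("estimated shift:", shifts[int(np.argmin(costs))])     # should recover 7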
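
Sketch for the greens-mower LPS talk: only the lateration step, with an assumed beacon layout, speed of sound, and timing-noise level. The robot position is recovered from ultrasonic time-of-arrival ranges to the four beacons by nonlinear least squares; the infrared orientation step is not shown.

    import numpy as np
    from scipy.optimize import least_squares

    beacons = np.array([[0.0, 0.0], [20.0, 0.0], [20.0, 20.0], [0.0, 20.0]])  # corners (m)
    true_pos = np.array([12.3, 7.8])
    c_sound = 343.0                                      # speed of sound (m/s)

    rng = np.random.default_rng(5)
    toa = np.linalg.norm(beacons - true_pos, axis=1) / c_sound
    toa += rng.normal(0.0, 20e-6, size=4)                # ~20 microsecond timing noise
    ranges = toa * c_sound                               # measured ranges (m)

    def residuals(p):
        return np.linalg.norm(beacons - p, axis=1) - ranges

    est = least_squares(residuals, x0=np.array([10.0, 10.0])).x
    print("true position     :", true_pos)
    print("estimated position:", est.round(3))           # should be close to true_pos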
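
Sketch for the curb-intersection MCL talk: a toy one-dimensional Monte Carlo localization along a road. The map stores intersection locations (gaps in the curb), particles are propagated with noisy odometry, and each particle is re-weighted by whether its predicted "curb gap here?" answer matches the synthetic detection. The map, noise levels, and likelihood values are invented, and the paper's Virtual LIDAR measurement model is not reproduced.

    import numpy as np

    rng = np.random.default_rng(6)
    intersections = np.array([40.0, 95.0, 180.0])   # map: curb gaps along the road (m)
    HALF_WIDTH = 5.0                                # half-width of each gap (m)

    def in_gap(x):
        """True where position x (array) falls inside a curb gap / intersection."""
        return np.any(np.abs(x - intersections[:, None]) < HALF_WIDTH, axis=0)

    n = 500
    particles = rng.uniform(0.0, 200.0, n)          # unknown initial position
    true_x = 10.0
    for _ in range(60):
        v = 3.0                                     # odometry: 3 m per step
        true_x += v
        particles += v + rng.normal(0.0, 0.3, n)    # propagate with odometry noise
        z = bool(in_gap(np.array([true_x]))[0])     # did the vehicle detect a gap?
        w = np.where(in_gap(particles) == z, 1.0, 0.05)            # simple likelihood
        particles = rng.choice(particles, size=n, p=w / w.sum())   # resample
    print("true position     : %.1f m" % true_x)
    print("estimated position: %.1f m" % particles.mean())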