TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos is available via this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv

Vision-Based Attention and Interaction

  • Computing Object-Based Saliency in Urban Scenes Using Laser Sensing Authors: Zhao, Yipu; He, Mengwen; Zhao, Huijing; Davoine, Franck; Zha, Hongbin
    Generating a low-level map of a complex environment as a cloud of 3D laser points, using a robot equipped with laser scanners, is by now a well-established technique. Given a cloud of 3D laser points of an urban scene, this paper proposes a method for locating objects of interest, e.g. traffic signs or road lamps, by computing object-based saliency. Our major contributions are: 1) a method for extracting simple geometric features from laser data, analyzing both range images and 3D laser points; 2) a graph-based object model describing the composition of geometric features; 3) a graph-matching method for locating the objects of interest in the laser data. Experimental results on real laser data depicting urban scenes are presented; the efficiency as well as the limitations of the method are discussed.
  • Where Do I Look Now? Gaze Allocation During Visually Guided Manipulation Authors: Nunez-Varela, Jose; Ravindran, Balaraman; Wyatt, Jeremy
    In this work we present principled methods for coordinating a robot's oculomotor system with the rest of its body motor systems. The problem is to decide which physical actions to perform next and where the robot's gaze should be directed in order to gain information that is relevant to the success of its physical actions. Previous work on this problem has shown that a reward-based coordination mechanism provides an efficient solution. However, that approach does not allow the robot to move its gaze to different parts of the scene, treats the robot as having only one motor system, and assumes that all actions have the same duration. The main contributions of our work are to extend that reward-based approach by making decisions about where to fixate the robot's gaze, handling multiple motor systems, and handling actions of variable duration. We compare our approach against two common baselines: random and round-robin gaze allocation. We show how our method provides a more effective strategy for allocating gaze where it is needed the most. (A toy version of this comparison is sketched after this list.)
  • 3D AAM Based Face Alignment under Wide Angular Variations Using 2D and 3D Data Authors: Wang, Chieh-Chih
    Active Appearance Models (AAMs) are widely used to estimate the shape of the face together with its orientation, but AAM approaches tend to fail when the face is under wide angular variations. Although it is feasible to capture the overall 3D face structure using 3D data from range cameras, the locations of facial features are often estimated imprecisely or incorrectly due to depth measurement uncertainty. Face alignment using 2D and 3D images suffers from different issues and has varying reliability in different situations. Existing approaches introduce a weighting function to balance the 2D and 3D alignments, but the weighting function is tuned manually and the sensor characteristics are not taken into account. In this paper, we propose to balance 3D face alignment between 2D and 3D data based on the observed data and the sensors' characteristics. The feasibility of wide-angle face alignment is demonstrated using two different sets of depth and conventional cameras. The experimental results show that a stable alignment is achieved, with a maximum improvement of 26% over 3D AAM using 2D images and a 30% improvement over state-of-the-art 3DMM methods in terms of 3D head pose estimation.
  • Robots That Validate Learned Perceptual Models Authors: Klank, Ulrich; Mösenlechner, Lorenz; Maldonado, Alexis; Beetz, Michael
    Service robots that are to operate autonomously need to perform actions reliably and to adapt to their changing environment using learning mechanisms. Ideally, robots should learn continuously, but this approach often suffers from problems such as over-fitting, drift, or incomplete data. In this paper, we propose a method to automatically validate autonomously acquired perception models. These perception models are used to localize objects in the environment so that the robot can manipulate them. Our approach verifies the learned perception models by moving the robot, trying to re-detect an object, and then trying to grasp it. From observable failures of these actions and from high-level loop closures that validate eventual success, we can derive certain qualities of our models and our environment. We evaluate our approach using two different detection algorithms, one based on 2D RGB data and one on 3D point clouds. We show that our system improves perception performance significantly by learning which of the models works better in a given situation and context, and that this additional validation enables successful continuous learning. The strictest precondition for learning such perceptual models is correct object segmentation, which we evaluate in a second experiment.
  • Uncalibrated Visual Servoing for Intuitive Human Guidance of Robots Authors: Marshall, Matthew; Matthews, James; Hu, Ai-Ping; McMurray, Gary
    We propose a novel implementation of visual servoing whereby a human operator can guide a robot relative to the coordinate frame of an eye-in-hand camera. Among other applications, this allows the operator to work in the image space of the eye-in-hand camera. It is achieved using a gamepad, a time-of-flight camera (an active sensor that produces depth data), and a recursive least-squares update with Gauss-Newton control. Contributions of this paper include placing a person in the control loop of a visual-servoing system and the introduction of uncalibrated position-based visual servoing. The system's efficacy is evaluated in trials involving human operators in different scenarios.
  • Leveraging RGB-D Data: Adaptive Fusion and Domain Adaptation for Object Detection Authors: Spinello, Luciano; Luber, Matthias; Arras, Kai Oliver
    Vision and range sensing are among the richest sensory modalities for perception in robotics and related fields. This paper addresses the problem of how to best combine image and range data for the task of object detection. In particular, we propose a novel adaptive fusion approach, hierarchical Gaussian Process mixtures of experts, able to account for missing information and cross-cue data consistency. The hierarchy is a two-tier architecture that, for each modality, each frame, and each detection, computes a weight function using Gaussian Processes that reflects the confidence of the respective information. We further propose a method called cross-cue domain adaptation that makes use of large image data sets to improve the depth-based object detector, for which only few training samples exist. In experiments that include a comparison with alternative sensor fusion schemes, we demonstrate the viability of the proposed methods and achieve significant improvements in classification accuracy. (A simplified sketch of the confidence-weighted fusion idea follows this list.)
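
A minimal sketch of the reward-based gaze allocation compared in the Nunez-Varela et al. talk above, against its random and round-robin baselines. Everything here is an illustrative assumption (three motor systems with different uncertainty growth rates, a fixed fixation gain, summed uncertainty as a cost proxy); it shows the shape of the idea, not the authors' model.

```python
import random

N_SYSTEMS = 3                    # e.g. two arms and a gripper (assumption)
STEPS = 200
LOOK_GAIN = 0.5                  # fraction of uncertainty removed per fixation
GROWTH = [0.10, 0.05, 0.01]      # per-system uncertainty growth while unobserved

def run(policy):
    sigma = [1.0] * N_SYSTEMS    # per-system positional uncertainty
    rr = 0                       # round-robin pointer
    total_cost = 0.0
    for _ in range(STEPS):
        if policy == "random":
            target = random.randrange(N_SYSTEMS)
        elif policy == "round_robin":
            target, rr = rr, (rr + 1) % N_SYSTEMS
        else:
            # reward-based: fixate where the expected uncertainty
            # reduction (LOOK_GAIN * sigma[i]) is largest
            target = max(range(N_SYSTEMS), key=lambda i: sigma[i])
        for i in range(N_SYSTEMS):
            sigma[i] = sigma[i] * (1 - LOOK_GAIN) if i == target \
                       else sigma[i] + GROWTH[i]
        total_cost += sum(sigma)  # proxy for the risk of action failure
    return total_cost

random.seed(0)
for p in ("random", "round_robin", "reward"):
    print(p, round(run(p), 1))   # lower is better; reward-based wins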
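
And for the Spinello et al. talk, a much-simplified, flat (non-hierarchical) sketch of confidence-weighted fusion: one scikit-learn Gaussian Process per modality learns, from a single context feature (distance), how reliable that modality's detector is, and the two detector scores are fused by those predicted weights. The synthetic reliability curves are assumptions standing in for real validation data; the paper's hierarchical mixture of experts is considerably richer.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
dist = rng.uniform(1, 10, size=(100, 1))     # context: distance to detection
# Synthetic "observed reliability": the image cue degrades slowly with
# distance, the depth cue degrades faster (purely illustrative).
rel_img = np.clip(1.0 - 0.05 * dist.ravel() + 0.05 * rng.normal(size=100), 0, 1)
rel_dep = np.clip(1.0 - 0.09 * dist.ravel() + 0.05 * rng.normal(size=100), 0, 1)

gp_img = GaussianProcessRegressor().fit(dist, rel_img)
gp_dep = GaussianProcessRegressor().fit(dist, rel_dep)

def fuse(score_img, score_dep, d):
    """Fuse two detector scores, weighted by each cue's predicted reliability."""
    w_i = max(gp_img.predict([[d]])[0], 1e-6)
    w_d = max(gp_dep.predict([[d]])[0], 1e-6)
    return (w_i * score_img + w_d * score_dep) / (w_i + w_d)

print(fuse(0.8, 0.4, d=8.0))   # far away, so the image cue dominates
```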

Control and Planning for UAVs

  • Deploying the Max-Sum Algorithm for Decentralised Coordination and Task Allocation of Unmanned Aerial Vehicles for Live Aerial Imagery Collection Authors: Delle Fave, Francesco Maria; Rogers, Alex; Xu, Zhe; Sukkarieh, Salah; Jennings, Nick
    We introduce a new technique for coordinating teams of unmanned aerial vehicles (UAVs) deployed to collect live aerial imagery of the scene of a disaster. We define this problem as one of task assignment, where the UAVs dynamically coordinate over tasks representing the imagery collection requests. To measure the quality of the assignment of one or more UAVs to a task, we propose a novel utility function that encompasses several constraints, such as the task's importance and the UAVs' battery capacity, so as to maximise performance. We then solve the resulting optimisation problem using a fully asynchronous and decentralised implementation of the max-sum algorithm, a well-known message-passing algorithm previously used only in simulated domains (a minimal max-sum sketch appears after this list). Finally, we evaluate our approach both in simulation and on real hardware. First, we empirically evaluate our utility function and show that it yields a better trade-off between the quantity and quality of completed tasks than similar utilities that do not take all the constraints into account. Second, we deploy it on two hexacopters and assess its practical viability in the real world.
  • Mixed-Integer Quadratic Program Trajectory Generation for Heterogeneous Quadrotor Teams Authors: Mellinger, Daniel; Kushleyev, Aleksandr; Kumar, Vijay
    We present an algorithm for generating optimal trajectories for teams of heterogeneous quadrotors in three-dimensional environments with obstacles. We formulate the problem as a mixed-integer quadratic program (MIQP) in which the integer constraints enforce collision avoidance (a toy instance is sketched after this list). The method allows for different sizes, capabilities, and varying dynamic effects between different quadrotors. Experimental results illustrate the method applied to teams of up to four quadrotors, ranging from 65 to 962 grams and 21 to 67 cm in width, following trajectories through three-dimensional environments with obstacles at accelerations approaching 1 g.
  • Safety Verification of Reactive Controllers for UAV Flight in Cluttered Environments Using Barrier Certificates Authors: Barry, Andrew J.; Majumdar, Anirudha; Tedrake, Russ
    Unmanned aerial vehicles (UAVs) have a so-far untapped potential to operate at high speeds through cluttered environments. Many of these systems are limited by their ad-hoc reactive controllers using simple visual cues like optical flow. Here we consider the problem of formally verifying an output-feedback controller for an aircraft operating in an unknown environment. Using recent advances in sums-of-squares programming that allow for efficient computation of barrier functions, we search for global certificates of safety for the closed-loop system in a given environment. In contrast to previous work, we use rational functions to globally approximate non-smooth dynamics and use multiple barrier functions to guard against more than one obstacle. We expect that these formal verification techniques will allow for the comparison, and ultimately optimization, of reactive controllers for robustness to varying initial conditions and environments.
  • On-board Velocity Estimation and Closed-loop Control of a Quadrotor UAV based on Optical Flow Authors: Grabe, Volker; Buelthoff, Heinrich H.; Robuffo Giordano, Paolo
    Robot vision has become a field of increasing importance in micro aerial vehicle robotics with the availability of small and light hardware. While most approaches rely on external ground stations because of the need for high computational power, we present a fully autonomous setup using only on-board hardware. Our work is based on the continuous homography constraint to recover ego-motion from optical flow. We are thus able to provide an efficient fallback routine for any kind of UAV (unmanned aerial vehicle), since we rely solely on a monocular camera and on-board computation. In particular, we devised two variants of the classical continuous 4-point algorithm and provide an extensive experimental evaluation against known ground truth (the linear core of this estimation is sketched after this list). The results show that our approach is able to recover the ego-motion of a flying UAV in realistic conditions while relying only on the limited on-board computational power. Furthermore, we exploit the velocity estimate to close the control loop and steer the motion of the UAV online.
  • Visual Terrain Classification by Flying Robots Authors: Khan, Yasir Niaz; Masselli, Andreas; Zell, Andreas
    In this paper we investigate the effectiveness of SURF features for visual terrain classification by outdoor flying robots. A quadrocopter fitted with a single camera is flown over different terrains to take images of the ground below. Each image is divided into a grid, and SURF features are calculated at the grid intersections (this pipeline is sketched after this list). A classifier is then used to learn to differentiate between terrain types. Classification results of the SURF descriptor are compared with results from other texture descriptors such as Local Binary Patterns and Local Ternary Patterns. Six different terrain types are considered in this approach. Random forests are used for classification on each descriptor. It is shown that SURF features perform better than the other descriptors at higher resolutions.
  • Real-Time Decentralized Search with Inter-Agent Collision Avoidance Authors: Gan, Seng Keat; Fitch, Robert; Sukkarieh, Salah
    This paper addresses the problem of coordinating a team of mobile autonomous sensor agents performing a cooperative mission while explicitly avoiding inter-agent collisions during the team negotiation process. Many multi-agent cooperative approaches disregard the potential hazards between agents, which are an important consideration for many systems, especially airborne ones. In this work, team negotiation is performed using a decentralized gradient-based optimization approach, while safety-distance constraints are explicitly designed and handled using Lagrange multiplier methods. The novelty of our work is the demonstration of a decentralized form of inter-agent collision avoidance inside the loop of the agents' real-time group mission optimization, where the algorithm retains the performance of the original mission while minimizing the probability of inter-agent collisions. An explicit constraint-gradient formulation is derived and used to improve computational efficiency and solution accuracy. The effectiveness and robustness of our algorithm are verified in a simulated environment by coordinating a team of UAVs searching for targets in a large-scale environment.
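
A minimal max-sum sketch for the Delle Fave et al. talk above: two UAVs are variables (domain: which imagery task to serve) and each task is a factor that rewards coverage and penalizes duplication. The utility numbers are illustrative assumptions; the paper's utility additionally models importance and battery, and its implementation is asynchronous and decentralised. Max-sum is exact on trees; on this small cyclic graph it converges to the optimum in practice.

```python
import itertools
import numpy as np

N_UAV, N_TASK = 2, 3
value = np.array([4.0, 2.0, 1.0])        # task importance (assumption)

def factor(j, assign):
    """Utility contributed by task j under a joint assignment."""
    k = assign.count(j)                  # how many UAVs chose task j
    return value[j] * min(k, 1) - 0.5 * max(k - 1, 0)   # duplication penalty

# q[i, j]: message from UAV (variable) i to task (factor) j, one entry per
# domain value; r[j, i]: message from factor j back to variable i.
q = np.zeros((N_UAV, N_TASK, N_TASK))
r = np.zeros((N_TASK, N_UAV, N_TASK))

for _ in range(20):
    for i in range(N_UAV):
        for j in range(N_TASK):
            q[i, j] = r[:, i, :].sum(axis=0) - r[j, i, :]   # sum over k != j
            q[i, j] -= q[i, j].mean()                       # keep messages bounded
    for j in range(N_TASK):
        for i in range(N_UAV):
            for xi in range(N_TASK):
                best = -np.inf
                # maximize over the other UAVs' choices
                for rest in itertools.product(range(N_TASK), repeat=N_UAV - 1):
                    assign = list(rest)
                    assign.insert(i, xi)
                    s = factor(j, assign) + sum(q[i2, j, assign[i2]]
                                                for i2 in range(N_UAV) if i2 != i)
                    best = max(best, s)
                r[j, i, xi] = best

beliefs = r.sum(axis=0)                  # (N_UAV, N_TASK)
print("assignments:", [int(np.argmax(beliefs[i])) for i in range(N_UAV)])
```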
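
For the Mellinger et al. talk, a toy CVXPY instance of the MIQP idea: a quadratic smoothness objective plus big-M integer constraints forcing two robots exchanging positions in the plane to stay at least d apart at every timestep. The solver choice is an assumption (a mixed-integer-capable backend such as ECOS_BB, SCIP, or GUROBI is required), and the paper's quadrotor dynamics and obstacle constraints are omitted.

```python
import cvxpy as cp

T, d, M = 8, 1.0, 100.0
p1 = cp.Variable((T, 2))                 # robot 1 positions over T timesteps
p2 = cp.Variable((T, 2))
b = cp.Variable((T, 4), boolean=True)    # big-M selectors per timestep

cons = [p1[0] == [0, 0], p2[0] == [3, 0],          # start positions
        p1[T - 1] == [3, 0], p2[T - 1] == [0, 0]]  # swapped goals
for t in range(T):
    # At every step at least one axis-aligned separation of d must hold:
    # the binaries relax all but (at least) one of these four constraints.
    cons += [p1[t, 0] - p2[t, 0] >= d - M * b[t, 0],
             p2[t, 0] - p1[t, 0] >= d - M * b[t, 1],
             p1[t, 1] - p2[t, 1] >= d - M * b[t, 2],
             p2[t, 1] - p1[t, 1] >= d - M * b[t, 3],
             cp.sum(b[t]) <= 3]

# Quadratic objective: smooth (short) paths for both robots.
obj = cp.Minimize(cp.sum_squares(cp.diff(p1, axis=0)) +
                  cp.sum_squares(cp.diff(p2, axis=0)))
cp.Problem(obj, cons).solve(solver=cp.ECOS_BB)   # any MIQP-capable solver
print(p1.value.round(2))
print(p2.value.round(2))
```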
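
For the Grabe et al. talk, a sketch of the linear core of continuous-homography ego-motion: for calibrated homogeneous image points x on a planar scene, the flow obeys x_dot = H x - (e3^T H x) x with H = [w]_x + (1/d) v n^T, so H can be recovered from four or more point/flow pairs by least squares, up to an additive multiple of the identity. The paper's two 4-point variants and the decomposition of H into (v/d, w, n) are not reproduced here.

```python
import numpy as np

E3 = np.array([0.0, 0.0, 1.0])

def estimate_H(pts, flows):
    """Least-squares H from (N,2) calibrated points and their (N,2) flows."""
    rows, rhs = [], []
    for (px, py), (fu, fv) in zip(pts, flows):
        x = np.array([px, py, 1.0])
        # x_dot = H x - (e3^T H x) x is linear in the 9 entries of H
        A = np.kron(np.eye(3), x) - np.outer(x, np.kron(E3, x))
        rows.append(A[:2])               # the third row is identically zero
        rhs.extend([fu, fv])
    h, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    return h.reshape(3, 3)               # defined only up to adding c*I

# Synthetic check: build H from a known motion and plane, simulate flow.
w, v_over_d, n = np.array([0, 0, 0.1]), np.array([0.2, 0, 0]), np.array([0, 0, 1.0])
W = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
H_true = W + np.outer(v_over_d, n)

rng = np.random.default_rng(1)
pts = rng.uniform(-0.5, 0.5, size=(6, 2))
flows = [((H_true @ np.r_[p, 1.0])
          - (H_true[2] @ np.r_[p, 1.0]) * np.r_[p, 1.0])[:2] for p in pts]

H_est = estimate_H(pts, flows)
tf = lambda A: A - np.trace(A) / 3 * np.eye(3)       # compare trace-free parts
print(np.allclose(tf(H_est), tf(H_true), atol=1e-6))  # True
```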
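
And for the Khan et al. talk, a sketch of the per-image pipeline: descriptors computed at grid intersections, fed to a random forest. Note that SURF is patented and lives in OpenCV's contrib/nonfree module (cv2.xfeatures2d); a free descriptor such as ORB can be swapped in. Grid step, keypoint size, and forest size are assumptions.

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def grid_descriptors(img_gray, step=32, kp_size=32):
    """One SURF descriptor per grid intersection of a grayscale image."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kps = [cv2.KeyPoint(float(x), float(y), kp_size)
           for y in range(step, img_gray.shape[0] - step, step)
           for x in range(step, img_gray.shape[1] - step, step)]
    _, desc = surf.compute(img_gray, kps)
    return desc                          # (n_cells, 64) descriptor matrix

# Training left as comments, since it needs the labelled aerial images:
# X = np.vstack([grid_descriptors(img) for img in train_images])
# y = ...  # one terrain label (grass, asphalt, gravel, ...) per grid cell
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# predictions = clf.predict(grid_descriptors(test_image))
```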

Industrial Robotics

  • Tool Position Estimation of a Flexible Industrial Robot Using Recursive Bayesian Methods Authors: Axelsson, Patrik; Karlsson, Rickard; Norrlöf, Mikael
    A sensor fusion method for state estimation of a flexible industrial robot is presented. By measuring the acceleration at the end-effector and fusing it with the motor angle observations, the accuracy of the arm angular position estimate is improved significantly. The problem is formulated in a Bayesian estimation framework and two solutions are proposed: one using the extended Kalman filter (EKF) and one using the particle filter (PF). The technique is verified in experiments on an ABB IRB4600 robot, where the accelerometer method shows significantly better dynamic performance, even when model errors are present.
  • A Sensor-Based Approach for Error Compensation of Industrial Robotic Workcells Authors: Tao, Pey Yuen; Yang, Guilin; Tomizuka, Masayoshi
    Industrial robotic manipulators have excellent repeatability, but their accuracy is significantly poorer. Numerous error sources in the robotic workcell contribute to this accuracy problem, and modeling and identifying all of them to achieve the required accuracy may be difficult. To resolve the accuracy issues, a sensor-based indirect error compensation approach is proposed in this paper, where the errors are compensated online via measurements of the work object. The sensor captures a point cloud of the work object; together with the CAD model of the work object, the actual relative pose between the sensor frame and the work-object frame can be established via point cloud registration (a sketch using an off-the-shelf ICP implementation appears after this list). Once this relationship has been established, the robot can move the tool accurately relative to the work-object frame near the point of compensation. A data pre-processing technique is proposed to reduce computation time and to avoid local-minimum solutions during point cloud registration. A simulation study illustrates the effectiveness of the proposed solution.
  • Robot End-Effector Sensing with Position Sensitive Detector and Inertial Sensors Authors: Wang, Cong; Chen, Wenjie; Tomizuka, Masayoshi
    For the motion control of industrial robots, the end-effector performance is of ultimate interest. However, industrial robots are generally equipped only with motor-side encoders, so accurate estimation of the end-effector position and velocity is difficult due to complex joint dynamics. To overcome this problem, this paper presents an optical sensor based on a position sensitive detector (PSD), referred to as the PSD camera, for direct end-effector position sensing. The PSD offers high precision and fast response while being cost-effective, and is thus favorable for real-time feedback applications. In addition, to obtain good velocity estimates, a kinematic Kalman filter (KKF) is applied to fuse the measurements from the PSD camera with those from inertial sensors mounted on the end-effector (a per-axis sketch of this filter appears after this list). The performance of the developed PSD camera and the application of the KKF sensor fusion scheme have been validated through experiments on an industrial robot.
  • Experiments towards Automated Sewing with a Multi-Robot System Authors: Schrimpf, Johannes; Wetterwald, Lars Erik
    In this paper, a concept for automated multi-robot-aided sewing is presented. The objective of the work is to demonstrate automatic sewing of 3D-shaped covers for recliners by assembling two hide parts of different shapes, using two robots to align the parts during sewing. The system consists of an industrial sewing machine and two real-time-controlled Universal Robots 6-axis industrial manipulators. A force feedback system combined with optical edge sensors is evaluated for control of the sewing process; the force sensors are used to synchronize the velocity and feed rate between the robots and the sewing machine. A test cell was built to determine the feasibility of the force feedback control and velocity synchronization. Experiments are presented that investigate the ability of the robot to feed a hide part into the sewing machine using a force sensor, under different strategies for velocity synchronization.
  • Automated Throwing and Capturing of Cylinder-Shaped Objects Authors: Frank, Thorsten; Janoske, Uwe; Mittnacht, Anton; Schroedter, Christian
    A new approach to transporting objects within production systems by automated throwing and capturing is investigated. This paper presents an implementation consisting of a throwing robot and a capturing robot; the throwing robot uses a linear axis and the capturing robot a rotary axis. The throwing robot is capable of throwing cylinder-shaped objects onto a target point with high precision, and the capturing robot then smoothly grips the cylinders in flight by means of a rotational movement. To synchronize the capturing robot with the cylinder's pose and velocity, the cylinder's trajectory has to be modeled, as well as the motion sequences of both robots (a minimal ballistic catch-point calculation is sketched after this list). The throwing and capturing tasks are performed by the robots automatically, without the use of any external sensor system.
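
A sketch of the registration step in the Tao et al. talk above, using Open3D's ICP (the abstract does not name a library, so this is a stand-in): the sensor's point cloud of the work object is aligned to points sampled from its CAD model, and the resulting transform gives the actual work-object pose. File names, voxel size, and the correspondence threshold are placeholders; the voxel downsampling merely stands in for the paper's pre-processing technique.

```python
import numpy as np
import open3d as o3d

scan = o3d.io.read_point_cloud("workpiece_scan.pcd")       # sensor point cloud
cad = o3d.io.read_triangle_mesh("workpiece_cad.stl")       # CAD model
model = cad.sample_points_uniformly(number_of_points=50000)

# Downsampling reduces computation and helps avoid poor local minima.
scan_ds = scan.voxel_down_sample(voxel_size=2.0)           # units: mm here
model_ds = model.voxel_down_sample(voxel_size=2.0)

result = o3d.pipelines.registration.registration_icp(
    scan_ds, model_ds,
    max_correspondence_distance=10.0,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)   # 4x4 pose of the scan relative to the model
```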
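
For the Wang et al. talk, a one-axis sketch of a kinematic Kalman filter of the kind the paper applies: the accelerometer drives the prediction as a known input, and the PSD camera's position measurement corrects the estimate whenever a sample arrives. Sample rates and noise levels are illustrative assumptions.

```python
import numpy as np

dt = 0.001                               # accelerometer period (assumption)
F = np.array([[1.0, dt], [0.0, 1.0]])    # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])      # accelerometer enters as an input
H = np.array([[1.0, 0.0]])               # PSD camera measures position only
Q = 1e-4 * (B @ B.T)                     # process noise from accel noise
R = np.array([[1e-6]])                   # PSD measurement noise (m^2)

x = np.zeros((2, 1))
P = np.eye(2)

def kkf_step(accel, psd_pos=None):
    """Predict with the accelerometer; correct when a PSD sample arrives."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    if psd_pos is not None:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[psd_pos]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x.ravel()                     # fused position and velocity

# Fast accelerometer loop with a slower PSD update every 10th step:
for k in range(100):
    est = kkf_step(accel=0.1, psd_pos=None if k % 10 else 0.0)
```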
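
Finally, for the Frank et al. talk, the kind of trajectory model the capture side needs, as a drag-free sketch: given the cylinder's release state, predict when and where it crosses the capture height so the rotary-axis robot can be synchronized to it. The paper's actual model and cell geometry may differ.

```python
import math

G = 9.81

def catch_state(p0, v0, capture_height):
    """Release position p0=(x0, z0) and velocity v0=(vx0, vz0) of the
    cylinder; returns (t, x, vx, vz) where the ballistic arc crosses
    capture_height on its descending branch."""
    x0, z0 = p0
    vx0, vz0 = v0
    # Solve z0 + vz0*t - 0.5*G*t**2 = capture_height for the later root.
    disc = vz0**2 - 2.0 * G * (capture_height - z0)
    if disc < 0:
        raise ValueError("trajectory never reaches the capture height")
    t = (vz0 + math.sqrt(disc)) / G
    return t, x0 + vx0 * t, vx0, vz0 - G * t

# Thrown from 1.0 m with 3 m/s horizontal and 2 m/s vertical velocity,
# captured at 1.2 m on the way down:
print(catch_state((0.0, 1.0), (3.0, 2.0), capture_height=1.2))
```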