TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos can be obtained by visiting this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv

Visual Tracking

  • Generic Realtime Kernel Based Tracking Authors: Hadj-Abdelkader, Hicham; Mezouar, Youcef; Chateau, Thierry
    This paper deals with the design of a generic visual tracking algorithm suitable for a large class of cameras (single-viewpoint sensors). It is based on the estimation of the relationship between observations and motion on the sphere. This is efficiently achieved using a kernel-based regression function on a generic linearly weighted sum of non-linear basis functions. We also present two sets of experiments. The first shows the efficiency of our algorithm through tracking in video sequences acquired with three types of cameras (conventional, dioptric fisheye, and catadioptric); real-time performance is demonstrated by tracking one or several planes. The second set of experiments presents an application of our tracking algorithm to visual servoing with a fisheye camera. (A minimal sketch of the kernel-regression idea follows this session's list.)
  • Generative Object Detection and Tracking in 3D Range Data Authors: Kaestner, Ralf; Maye, Jerome; Pilat, Yves; Siegwart, Roland
    This paper presents a novel approach to tracking dynamic objects in 3D range data. Its key contribution lies in the generative object detection algorithm, which allows the tracker to robustly extract objects of varying sizes and shapes from the observations. In contrast to tracking methods using discriminative detectors, we are thus able to generalize over a wide range of object classes matching our assumptions. Whilst the generative model underlying our framework inherently scales with the complexity and the noise characteristics of the environment, all parameters involved in the detection process obey a clean probabilistic interpretation. Nevertheless, our unsupervised object detection and tracking algorithm achieves real-time performance, even in highly dynamic scenarios involving a significant number of moving objects. Through an application to populated urban settings, we show that the tracking performance of the presented approach is comparable to state-of-the-art discriminative methods.
  • Moving Vehicle Detection and Tracking in Unstructured Environments Authors: Wojke, Nicolai; Häselich, Marcel
    The detection and tracking of moving vehicles is a necessity for collision-free navigation. In natural unstructured environments, motion-based detection is challenging due to a low signal-to-noise ratio. This paper describes our approach for an autonomous outdoor robot, traveling at up to 14 km/h, that is equipped with a Velodyne HDL-64E S2 for environment perception. We extend existing work that has proven reliable in urban environments. To overcome the unavailability of road network information for background separation, we introduce a foreground model that incorporates geometric as well as temporal cues. Local shape estimates successfully guide vehicle localization. Extensive evaluation shows that the system works reliably and efficiently in various outdoor scenarios without any prior knowledge about the road network. Experiments with our own sensor, as well as on publicly available data from the DARPA Urban Challenge, showed that more than 96% of vehicles were correctly identified.
  • Learning to Place New Objects Authors: Jiang, Yun; Zheng, Changxi; Lim, Marcus; Saxena, Ashutosh
    The ability to place objects in an environment is an important skill for a personal robot. An object should not only be placed stably, but should also be placed in its preferred location/orientation. For instance, it is preferable that a plate be inserted vertically into the slot of a dish rack rather than placed horizontally in it. Unstructured environments such as homes contain a large variety of object types as well as of placing areas, so our algorithms should be able to handle placing new object types in new placing areas. These reasons make placing a challenging manipulation task. In this work, we propose a supervised learning approach for finding good placements given point clouds of the object and the placing area. Our method combines features that capture support, stability, and preferred configurations, and uses a shared sparsity structure in its parameters. Even when neither the object nor the placing area has been seen previously in the training set, our learning algorithm predicts good placements. In robotic experiments, our method enables the robot to stably place known objects with a 98% success rate, and 98% when also considering semantically preferred orientations. In the case of placing a new object into a new placing area, the success rates are 82% and 72%, respectively. (A toy sketch of the placement-scoring idea follows this session's list.)
  • Lost in Translation (and Rotation): Rapid Extrinsic Calibration for 2D and 3D LIDARs Authors: Maddern, William; Harrison, Alastair; Newman, Paul
    This paper describes a novel method for determining the extrinsic calibration parameters between 2D and 3D LIDAR sensors with respect to a vehicle base frame. To recover the calibration parameters, we optimize the quality of a 3D point cloud produced by the vehicle as it traverses an unknown, unmodified environment. The point cloud quality metric is derived from Rényi Quadratic Entropy and quantifies the compactness of the point distribution using only a single tuning parameter. We also present a fast approximate method to reduce the computational requirements of the entropy evaluation, allowing unsupervised calibration in vast environments with millions of points. The algorithm is analyzed using real-world data gathered in many locations, showing robust calibration performance and substantial speed improvements from the approximations. (The entropy metric itself is sketched below.)
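
For the kernel-based tracking paper above, the following is a minimal sketch of the generic ingredient its abstract names: a kernel regression (a linearly weighted sum of nonlinear basis functions) mapping observation differences to motion parameters. Everything here (function names, the RBF basis, the toy data) is an illustrative assumption, not the authors' code.

    import numpy as np

    def rbf_basis(A, B, gamma):
        # One Gaussian basis function per training sample.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def fit(X, Y, gamma=0.02, lam=1e-3):
        # Solve (K + lam*I) W = Y so predictions are K(x, X) @ W,
        # i.e. a linearly weighted sum of nonlinear basis functions.
        K = rbf_basis(X, X, gamma)
        return np.linalg.solve(K + lam * np.eye(len(X)), Y)

    def predict(X_train, W, x_new, gamma=0.02):
        # Map a new observation vector to motion parameters.
        return rbf_basis(x_new[None, :], X_train, gamma) @ W

    # Toy usage: observations are intensity-difference vectors sampled around
    # a reference template; targets are (tx, ty, theta) motion parameters.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    Y = np.tanh(X) @ rng.normal(size=(50, 3))
    W = fit(X, Y)
    print(predict(X, W, X[0]))  # approximately Y[0]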
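
For "Learning to Place New Objects", a toy sketch of the supervised scoring idea: hand-crafted features of an (object, placing-area) point-cloud pair are scored by a learned classifier, and the best-scoring candidate placement wins. The features and the plain logistic regression below are simplified stand-ins; the paper's model uses richer features and a shared-sparsity structure.

    import numpy as np

    def placement_features(obj_pts, area_pts):
        # Hypothetical stand-ins for the paper's support/stability features.
        gap = obj_pts[:, 2].min() - area_pts[:, 2].max()   # vertical clearance
        xy_dist = np.linalg.norm(obj_pts[:, :2].mean(0) - area_pts[:, :2].mean(0))
        spread = np.ptp(obj_pts[:, 2])                     # crude flatness proxy
        return np.array([gap, xy_dist, spread, 1.0])       # trailing bias term

    def train_logreg(F, y, lr=0.05, steps=500):
        # Plain logistic regression; the paper uses a shared-sparsity model.
        w = np.zeros(F.shape[1])
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-F @ w))
            w -= lr * F.T @ (p - y) / len(y)
        return w

    # Usage: score candidate placements and keep the best one.
    rng = np.random.default_rng(1)
    F = np.array([placement_features(rng.normal(size=(50, 3)),
                                     rng.normal(size=(50, 3))) for _ in range(200)])
    y = (F[:, 0] > np.median(F[:, 0])).astype(float)       # toy labels
    w = train_logreg(F, y)
    print("best candidate placement:", int((F @ w).argmax()))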
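
For the calibration paper above, a direct transcription of the point-cloud compactness idea: the Rényi quadratic entropy of a cloud is the negative log of the average pairwise Gaussian kernel value. This is the exact O(n^2) form with a single tuning parameter sigma; the paper's fast approximation and the surrounding optimization loop are omitted, and the kernel normalization constant is dropped since it only shifts the entropy by a constant.

    import numpy as np

    def renyi_quadratic_entropy(points, sigma):
        # Average pairwise Gaussian kernel (variance 2*sigma^2, unnormalized);
        # a crisper, more compact cloud gives a lower entropy.
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        return -np.log(np.exp(-d2 / (4.0 * sigma ** 2)).mean())

    # Calibration then reduces to searching for the extrinsic parameters that
    # minimize the entropy of the cloud rebuilt under those parameters, e.g.
    # (with a hypothetical build_cloud function):
    #   best = min(candidates,
    #              key=lambda p: renyi_quadratic_entropy(build_cloud(p), 0.1))
    cloud = np.random.default_rng(2).normal(size=(500, 3))
    print(renyi_quadratic_entropy(cloud, sigma=0.1))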

Physical Human-Robot Interaction

  • Planning Body Gesture of Android for Multi-Person Human-Robot Interaction Authors: Kondo, Yutaka; Takemura, Kentaro; Takamatsu, Jun; Ogasawara, Tsukasa
    Natural body gesture, as well as speech dialog, is crucial for human-robot interaction and human-robot symbiosis. We have previously proposed a real-time gesture planning method. In this paper, we give this method more flexibility by adding a motion parameterization function. This function is especially important in multi-person HRI because it adapts gestures to changes in the speaker's and/or object's locations. We implement our method in a multi-person HRI system on the android Actroid-SIT, and conduct two experiments to estimate the precision of the gestures and humans' impressions of the Actroid. Through these experiments, we confirmed that our method gives humans a more sophisticated impression.
  • Variable Admittance Control of a Four-Degree-Of-Freedom Intelligent Assist Device Authors: Lecours, Alexandre; Mayer-St-Onge, Boris; Gosselin, Clement
    Robots are currently used in some applications to enhance human performance, and it is expected that human/robot interactions will become more frequent in the future. In order to achieve effective human augmentation, the cooperation must be very intuitive to the human operator. This paper presents a variable admittance control approach to improve the system's intuitiveness. The proposed variable admittance law is based on the inference of human intentions from desired velocity and acceleration. Stability issues are discussed and a controller design example is given. Finally, experimental results obtained with a full-scale prototype of an intelligent assist device are presented in order to demonstrate the performance of the algorithm. (A single-axis sketch of the variable admittance idea follows this session's list.)
  • Extraction of Latent Kinematic Relationships between Human Users and Assistive Robots Authors: Morimoto, Jun; Hyon, Sang-Ho
    In this study, we propose a control method for movement-assistive robots that uses measured signals from human users. Some wearable assistive robots have mechanisms that can be adjusted to human kinematics (e.g., adjustable link lengths). However, since the human body has a complicated joint structure, it is generally difficult to design an assistive robot that mechanically fits human users well. We focus on developing a control algorithm that generates movements of a wearable assistive robot corresponding to those of the human user, even when the kinematic structures of the two differ. We first extract the latent kinematic relationship between a human user and the assistive robot. The extracted relationship is then used to control the assistive robot by converting human behavior into corresponding joint-angle trajectories of the robot. The proposed approach is evaluated on a simulated robot model and on our newly developed exoskeleton robot, XoR. (A least-squares stand-in for this mapping idea follows this session's list.)
  • Design & Personalization of a Cooperative Carrying Robot Controller Authors: Parker, Chris; Croft, Elizabeth
    In the near future, as robots become more advanced and affordable, we can envision their use as intelligent assistants in a variety of domains. An exemplar human-robot task identified in many previous works is cooperatively carrying a physically large object. An important task objective is to keep the carried object level. In this work, we propose an admittance-based controller that maintains a level orientation of a cooperatively carried object. The controller raises or lowers its end of the object with a human-like behavior in response to perturbations in the height of the other end of the object (e.g., the end supported by the human user). We also propose a novel tuning procedure, and find that most users are in close agreement about preferring a slightly under-damped controller response, even though they vary in their preferences regarding the speed of the controller's response.
  • Trust-Driven Interactive Visual Navigation for Autonomous Robots Authors: Xu, Anqi; Dudek, Gregory
    We describe a model of "trust" in human-robot systems that is inferred from their interactions, and inspired by similar concepts relating to trust among humans. This computable quantity allows a robot to estimate the extent to which its performance is consistent with a human’s expectations, with respect to task demands. Our trust model drives an adaptive mechanism that dynamically adjusts the robot's autonomous behaviors, in order to improve the efficiency of the collaborative team. We illustrate this trust-driven methodology through an interactive visual robot navigation system. This system is evaluated through controlled user experiments and a field demonstration using an aerial robot.
  • The 20-DOF Miniature Humanoid MH-2: A Wearable Communication System Authors: Tsumaki, Yuichi; Ono, Fumiaki; Tsukuda, Taisuke
    The 20-DOF miniature humanoid MH-2, designed as a wearable telecommunicator, is a personal telerobot system: an operator can communicate with remote people through the robot, which acts as an avatar of the operator. To date, four prototypes of the wearable telecommunicator (T1, T2, T3, and MH-1) have been developed as research platforms. MH-1 is a miniature humanoid robot with 11 DOF for mutual telexistence. Although its human-like appearance may be important for such communication systems, MH-1 is unable to achieve sophisticated gestures due to its lack of both wrist and body motions. In this paper, to tackle this problem, a 3-DOF parallel wire mechanism with a novel wire arrangement is introduced for the wrist, and 3-DOF body motions are also adopted. Consequently, a 20-DOF miniature humanoid with dual 7-DOF arms has been designed and developed. Details of the concept and design are discussed, and fundamental experiments with the developed 7-DOF arm confirm its mechanical properties.
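
For the variable admittance paper above, a single-axis sketch of the idea: the admittance law m*dv/dt + c*v = f is integrated forward, with the damping c lowered when velocity and acceleration suggest the operator wants to accelerate, and raised when they suggest braking. The specific inference rule, gains, and numbers below are assumptions for illustration, not the paper's controller.

    import numpy as np

    def variable_damping(v, a, c_min=5.0, c_max=40.0, k=2.0):
        # Assumed intention rule: v and a sharing sign (accelerating) lowers
        # damping for easy motion; opposite signs (braking) raise it.
        intent = np.tanh(k * v * a)
        return c_min + (c_max - c_min) * 0.5 * (1.0 - intent)

    def admittance_step(v, f, a_prev, m=10.0, dt=0.002):
        # One Euler step of  m * dv/dt + c(v, a) * v = f  (single axis).
        c = variable_damping(v, a_prev)
        a = (f - c * v) / m
        return v + a * dt, a

    # Usage: simulate a constant 15 N human push for one second.
    v, a = 0.0, 0.0
    for _ in range(500):
        v, a = admittance_step(v, 15.0, a)
    print(f"velocity after 1 s: {v:.3f} m/s")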
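
For the latent kinematic relationship paper, a least-squares stand-in for the core idea: learn a map from recorded human postures to matched robot postures in a joint space of different dimension, then use it to stream human motion onto the robot. The 7- and 5-DOF dimensions and the linear form are illustrative assumptions; the paper's extraction method is more sophisticated.

    import numpy as np

    rng = np.random.default_rng(3)
    human = rng.normal(size=(300, 7))                 # paired demonstration data
    A_true = rng.normal(size=(7, 5))
    robot = human @ A_true + 0.01 * rng.normal(size=(300, 5))

    # Extract the (here: linear) relationship between the two joint spaces.
    A, *_ = np.linalg.lstsq(human, robot, rcond=None)

    def human_to_robot(q_human):
        # Convert a streamed human posture into robot joint-angle targets.
        return q_human @ A

    print(np.allclose(human_to_robot(human[0]), robot[0], atol=0.1))  # True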

Robotic Software, Programming Environments, and Frameworks

  • A Framework for Autonomous Self-Righting of a Generic Robot on Planar Surfaces Authors: Kessens, Chad C.; Smith, Daniel; Osteen, Philip
    In the course of its operation, a robot may unintentionally tip over, rendering it unable to move normally. The ability to self-right and recover in such situations is crucial to mission completion and safe robot recovery. However, nearly all self-righting solutions to date are point solutions, each designed for a specific platform. As a first step toward a generic solution, this paper presents a framework for analyzing the self-righting capabilities of any generic robot on sloped planar surfaces. Based on the planar assumption, interactions with the ground can be modeled entirely using the robot's convex hull. We begin by analyzing the stability of each robot orientation for all possible joint configurations. From this, we develop a configuration-space map, defining stable state sets as nodes and the configurations where discontinuous state changes occur as transitions. Finally, we convert this map into a directed graph and assign costs to the transitions according to the changes in potential energy between states. Whether a robot can self-right then reduces to whether the goal state is reachable in this directed graph (a toy version of this search is sketched after this list). To illustrate each step in our framework, we use a two-dimensional robot with a one-degree-of-freedom arm, and then show a case study of iRobot's PackBot. Ultimately, we project that this framework will be useful both for designing robots with the ability to self-right and for maximizing the autonomous self-righting capabilities of fielded robots.
  • OpenFABMAP: An Open Source Toolbox for Appearance-Based Loop Closure Detection Authors: Glover, Arren; Maddern, William; Warren, Michael; Reid, Stephanie; Milford, Michael J; Wyeth, Gordon
    Appearance-based loop closure techniques, which leverage the high information content of visual images and can be used independently of pose, are now widely used in robotic applications. The current state of the art in the field is Fast Appearance-Based Mapping (FAB-MAP), which has been demonstrated in several seminal robotic mapping experiments. In this paper, we describe OpenFABMAP, a fully open source implementation of the original FAB-MAP algorithm. Beyond the benefits of full user access to the source code, OpenFABMAP provides a number of configurable options, including rapid codebook training and interest point feature tuning. We demonstrate the performance of OpenFABMAP on a number of published datasets and show the advantages of quick algorithm customisation. We present results from OpenFABMAP's application in a highly varied range of robotics research scenarios. (A schematic of the loop-closure pipeline shape follows this list.)
  • High-Resolution Depth Maps Based on TOF-Stereo Fusion Authors: Gandhi, Vineet; Cech, Jan; Horaud, Radu
    The combination of range sensors with color cameras can be very useful for robot navigation, semantic perception, manipulation, and telepresence. Several methods of combining range and color data have been investigated and successfully used in various robotic applications. Most of these systems suffer from noise in the range data and from the resolution mismatch between the range sensor and the color cameras, since the resolution of current range sensors is much lower than that of color cameras. High-resolution depth maps can be obtained using stereo matching, but this often fails to construct accurate depth maps of weakly or repetitively textured scenes, or when the scene exhibits complex self-occlusions. Range sensors provide coarse depth information regardless of the presence or absence of texture. The use of a calibrated system, composed of a time-of-flight (TOF) camera and a stereoscopic camera pair, allows data fusion that overcomes the weaknesses of both individual sensors. We propose a novel TOF-stereo fusion method based on an efficient seed-growing algorithm which projects the TOF data onto the stereo image pair as an initial set of correspondences. These initial "seeds" are then propagated using a similarity score based on a Bayesian model which combines an image similarity score with rough depth priors computed from the low-resolution range data. The overall result is a dense and accurate depth map at the resolution of the color cameras at hand. (A compact sketch of the seed-growing step follows this list.)
  • RoboFrameNet: Verb-Centric Semantics for Actions in Robot Middleware Authors: Thomas, Brian; Jenkins, Odest Chadwicke
    Advancements in robotics have led to an ever-growing repertoire of software capabilities (e.g., recognition, mapping, and object manipulation). However, as robotic capabilities grow, so does the complexity of operating and interacting with such robots (whether through speech, gesture, scripting, or programming). Language-based communication can offer users the ability to work with physically and computationally complex robots without diminishing the robot's inherent capability. However, it remains an open question how to build a common ground between natural language and goal-directed robot actions, particularly in a way that scales with the growth of robot capabilities. We examine using semantic frames, a linguistics concept describing scenes being acted out, as a conceptual stepping stone between natural language and robot action. We examine the scalability of this solution through the development of RoboFrameNet, a generic language-to-action pipeline for ROS (the Robot Operating System) that abstracts verbs and their dependents into semantic frames, then grounds these frames into actions. We demonstrate the framework through experiments with the PR2 and TurtleBot robot platforms and consider the future scalability of the approach. (A minimal sketch of the frame abstraction follows this list.)
  • Building Occupancy Maps with a Mixture of Gaussian Processes Authors: Kim, Soohwan; Kim, Jonghyuk
    This paper proposes a new method for occupancy map building using a mixture of Gaussian processes. We treat occupancy mapping as a binary classification problem, positions being occupied or not, and apply Gaussian processes. In particular, since the computational complexity of Gaussian processes grows as O(n^3), where n is the number of data points, we divide the training data into small subsets and apply a mixture of Gaussian processes. Our map building procedure consists of three steps. First, we cluster the acquired data by grouping laser hit points on the same line into the same cluster. Then, we build local occupancy maps using Gaussian processes with the clustered data. Finally, the local occupancy maps are merged into one using a mixture of Gaussian processes. Simulation results are compared with previous work, demonstrating the benefits of the approach. (A toy transcription of the divide-and-merge idea follows this list.)
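
For the self-righting framework above, the final step of the abstract reduces to graph search: stable state sets are nodes, discontinuous state changes are edges, and edge costs follow the potential-energy change. The toy states, energies, and cost rule below are invented for illustration.

    import heapq

    energy = {"upright": 0.0, "side": 2.0, "upside_down": 3.0}  # potential energy (J)
    edges = {  # state -> states reachable via some joint motion
        "upside_down": ["side"],
        "side": ["upright", "upside_down"],
        "upright": [],
    }

    def transition_cost(src, dst):
        # Assumed cost rule: only energy that must be injected counts.
        return max(0.0, energy[dst] - energy[src])

    def can_self_right(start, goal="upright"):
        # Dijkstra over the stability graph; minimal energy cost, or None.
        pq, seen = [(0.0, start)], set()
        while pq:
            cost, s = heapq.heappop(pq)
            if s == goal:
                return cost
            if s in seen:
                continue
            seen.add(s)
            for t in edges[s]:
                heapq.heappush(pq, (cost + transition_cost(s, t), t))
        return None

    print(can_self_right("upside_down"))  # 0.0: downhill all the way in this toy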
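
Alongside OpenFABMAP, a deliberately naive schematic of the appearance-based loop-closure pipeline shape: images become bag-of-words histograms over a visual codebook, and a revisit is flagged when a new histogram is sufficiently similar to a past one. This cosine test is not FAB-MAP's probabilistic model (which uses a Chow-Liu tree over word co-occurrences), nor the OpenFABMAP API; it only illustrates the data flow.

    import numpy as np

    def bow_histogram(word_ids, vocab_size):
        # Normalized visual-word histogram for one image.
        h = np.bincount(word_ids, minlength=vocab_size).astype(float)
        return h / max(h.sum(), 1.0)

    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    def detect_loop(query, past, threshold=0.8):
        # Best-matching past place above threshold, else None.
        if not past:
            return None
        scores = [cosine(query, h) for h in past]
        best = int(np.argmax(scores))
        return (best, scores[best]) if scores[best] >= threshold else None

    # Usage over a toy stream of quantized feature ("word") id arrays.
    rng = np.random.default_rng(4)
    past = []
    for t in range(10):
        h = bow_histogram(rng.integers(0, 100, size=200), 100)
        print(t, detect_loop(h, past))
        past.append(h)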
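
For the TOF-stereo fusion paper, a compact sketch of seed growing on a disparity grid: disparities from projected TOF points seed a priority queue, and each seed propagates to its four neighbours when a candidate disparity near the parent's scores well under an image-similarity term weighted by a prior around the seed depth. The per-pixel intensity check and Gaussian prior below are crude stand-ins for the paper's Bayesian similarity score.

    import heapq
    import numpy as np

    def grow(left, right, seeds, prior_sigma=2.0, tau=0.9):
        # Seeds are (row, col, disparity) triples from projected TOF points.
        H, W = left.shape
        disp = np.full((H, W), np.nan)
        pq = [(-1.0, y, x, d) for (y, x, d) in seeds]        # max-heap by score
        heapq.heapify(pq)
        while pq:
            _, y, x, d = heapq.heappop(pq)
            if not np.isnan(disp[y, x]):
                continue
            disp[y, x] = d
            for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < H and 0 <= nx < W) or not np.isnan(disp[ny, nx]):
                    continue
                best, best_d = 0.0, None
                for cand in (d - 1, d, d + 1):               # search near parent
                    if not (0 <= nx - cand < W):
                        continue
                    sim = 1.0 - abs(left[ny, nx] - right[ny, nx - cand])
                    prior = np.exp(-(cand - d) ** 2 / (2 * prior_sigma ** 2))
                    if sim * prior > best:
                        best, best_d = sim * prior, cand
                if best_d is not None and best >= tau:
                    heapq.heappush(pq, (-best, ny, nx, best_d))
        return disp

    # Toy check: a synthetic pair with constant disparity 3 and one TOF seed.
    rng = np.random.default_rng(5)
    L = rng.random((40, 60))
    R = np.roll(L, -3, axis=1)
    print(np.nanmean(grow(L, R, seeds=[(20, 30, 3)])))       # ~3.0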
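
For RoboFrameNet, a minimal sketch of the frame abstraction: a verb and its dependents fill the roles of a semantic frame, and a registry grounds filled frames into robot actions. The names and the print-based "action" are illustrative assumptions; the actual pipeline dispatches to ROS actions.

    from dataclasses import dataclass, field
    from typing import Callable, Dict

    @dataclass
    class SemanticFrame:
        verb: str
        elements: Dict[str, str] = field(default_factory=dict)  # role -> filler

    registry: Dict[str, Callable[[SemanticFrame], None]] = {}

    def ground(frame: SemanticFrame) -> None:
        # Dispatch a filled frame to the action registered for its verb.
        registry[frame.verb](frame)

    def navigate(frame: SemanticFrame) -> None:
        print(f"navigating to {frame.elements['goal']}")

    registry["go"] = navigate
    ground(SemanticFrame("go", {"goal": "kitchen"}))  # -> navigating to kitchen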
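
For the occupancy-mapping paper, a toy transcription of the divide-and-merge idea: one small GP per cluster of laser returns (occupancy regressed on +/-1 labels, a common simplification), with local predictions fused across clusters. The squared-exponential kernel, the synthetic clusters, and the inverse-variance fusion rule are assumptions, not the paper's exact mixture model.

    import numpy as np

    def gp_fit(X, y, ell=0.5, noise=0.1):
        # Squared-exponential kernel; cache the inverted Gram matrix.
        K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * ell**2))
        return np.linalg.inv(K + noise**2 * np.eye(len(X)))

    def gp_predict(X, y, Kinv, Xq, ell=0.5):
        Kq = np.exp(-((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * ell**2))
        mu = Kq @ Kinv @ y
        var = 1.0 - np.einsum("ij,jk,ik->i", Kq, Kinv, Kq)
        return mu, np.maximum(var, 1e-6)

    def mixture_predict(clusters, Xq):
        # Fuse local GP maps by inverse-variance weighting (assumed rule).
        mus, ws = [], []
        for X, y, Kinv in clusters:
            mu, var = gp_predict(X, y, Kinv, Xq)
            mus.append(mu)
            ws.append(1.0 / var)
        W = np.array(ws)
        return (np.array(mus) * W).sum(0) / W.sum(0)

    # Toy usage: three clusters standing in for grouped laser scan lines.
    rng = np.random.default_rng(6)
    clusters = []
    for c in range(3):
        X = rng.uniform(c, c + 1, size=(30, 2))
        y = np.where(X[:, 0] + X[:, 1] > 2 * c + 1, 1.0, -1.0)
        clusters.append((X, y, gp_fit(X, y)))
    Xq = rng.uniform(0, 3, size=(5, 2))
    print(mixture_predict(clusters, Xq))  # >0 ~ occupied, <0 ~ free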