TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos is available through PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv.

Robotic Software, Programming Environments, and Frameworks

  • A Framework for Autonomous Self-Righting of a Generic Robot on Planar Surfaces Authors: Kessens, Chad C.; Smith, Daniel; Osteen, Philip
    During various acts, a robot may unintentionally tip over, rendering it unable to move normally. The ability to self-right and recover in such situations is crucial to mission completion and safe robot recovery. However, nearly all self-righting solutions to date are point solutions, each designed for a specific platform. As a first step toward a generic solution, this paper presents a framework for analyzing the self-righting capabilities of any generic robot on sloped planar surfaces. Based on the planar assumption, interactions with the ground can be modeled entirely using the robot’s convex hull. We begin by analyzing the stability of each robot orientation for all possible joint configurations. From this, we develop a configuration space map, defining stable state sets as nodes and the configurations where discontinuous state changes occur as transitions. Finally, we convert this map into a directed graph and assign costs to the transitions according to changes in potential energy between states. Based upon the ability to traverse this directed graph to the goal state, one can analyze a robot’s ability to self-right. To illustrate each step in our framework, we use a two-dimensional robot with a one-degree-of-freedom arm, and then show a case study of iRobot’s Packbot. Ultimately, we project that this framework will be useful both for designing robots with the ability to self-right and for maximizing autonomous self-righting capabilities of fielded robots. (A minimal graph-search sketch of this idea appears after this list.)
  • OpenFABMAP: An Open Source Toolbox for Appearance-Based Loop Closure Detection Authors: Glover, Arren; Maddern, William; Warren, Michael; Reid, Stephanie; Milford, Michael J; Wyeth, Gordon
    Appearance-based loop closure techniques, which leverage the high information content of visual images and can be used independently of pose, are now widely used in robotic applications. The current state of the art in the field is Fast Appearance-Based Mapping (FAB-MAP), which has been demonstrated in several seminal robotic mapping experiments. In this paper, we describe OpenFABMAP, a fully open source implementation of the original FAB-MAP algorithm. Beyond the benefits of full user access to the source code, OpenFABMAP provides a number of configurable options including rapid codebook training and interest point feature tuning. We demonstrate the performance of OpenFABMAP on a number of published datasets and show the advantages of quick algorithm customisation. We present results from OpenFABMAP’s application in a highly varied range of robotics research scenarios. (A simplified appearance-matching sketch appears after this list.)
  • High-Resolution Depth Maps Based on TOF-Stereo Fusion Authors: Gandhi, Vineet; Cech, Jan; Horaud, Radu
    The combination of range sensors with color cameras can be very useful for robot navigation, semantic perception, manipulation, and telepresence. Several methods of combining range and color data have been investigated and successfully used in various robotic applications. Most of these systems suffer from noise in the range data and from the resolution mismatch between the range sensor and the color cameras, since the resolution of current range sensors is much lower than that of color cameras. High-resolution depth maps can be obtained using stereo matching, but this often fails to construct accurate depth maps of weakly or repetitively textured scenes, or when the scene exhibits complex self-occlusions. Range sensors provide coarse depth information regardless of the presence or absence of texture. The use of a calibrated system, composed of a time-of-flight (TOF) camera and a stereoscopic camera pair, allows data fusion, thus overcoming the weaknesses of both individual sensors. We propose a novel TOF-stereo fusion method based on an efficient seed-growing algorithm which projects the TOF data onto the stereo image pair as an initial set of correspondences. These initial “seeds” are then propagated using a similarity score based on a Bayesian model which combines an image similarity score with rough depth priors computed from the low-resolution range data. The overall result is a dense and accurate depth map at the resolution of the color cameras at hand. (A simplified seed-growing sketch appears after this list.)
  • RoboFrameNet: Verb-Centric Semantics for Actions in Robot Middleware Authors: Thomas, Brian; Jenkins, Odest Chadwicke
    Advancements in robotics have led to an ever-growing repertoire of software capabilities (e.g., recognition, mapping, and object manipulation). However, as robotic capabilities grow, so does the complexity of operating and interacting with such robots (whether through speech, gesture, scripting, or programming). Language-based communication can offer users the ability to work with physically and computationally complex robots without diminishing the robot's inherent capability. However, it remains an open question how to build a common ground between natural language and goal-directed robot actions, particularly in a way that scales with the growth of robot capabilities. We examine using semantic frames -- a concept from linguistics that describes scenes being acted out -- as a conceptual stepping stone between natural language and robot action. We examine the scalability of this solution through the development of RoboFrameNet, a generic language-to-action pipeline for ROS (the Robot Operating System) that abstracts verbs and their dependents into semantic frames, then grounds these frames into actions. We demonstrate the framework through experiments with the PR2 and Turtlebot robot platforms and consider the future scalability of the approach. (A minimal semantic-frame sketch appears after this list.)
  • Building Occupancy Maps with a Mixture of Gaussian Processes Authors: Kim, Soohwan; Kim, Jonghyuk
    This paper proposes a new method for occupancy map building using a mixture of Gaussian processes. We consider occupancy mapping as a binary classification problem of positions being occupied or not, and apply Gaussian processes. In particular, since the computational complexity of Gaussian processes grows as O(n^3), where n is the number of data points, we divide the training data into small subsets and apply a mixture of Gaussian processes. The procedure of our map building method consists of three steps. First, we cluster the acquired data by grouping laser hit points on the same line into the same cluster. Then, we build local occupancy maps by applying Gaussian processes to the clustered data. Finally, the local occupancy maps are merged into one by using a mixture of Gaussian processes. Simulation results are compared with previous work, demonstrating the benefits of the approach. (A simplified mixture-of-GPs sketch appears after this list.)
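
Illustrative sketches for the talks above follow. Each is a minimal, self-contained approximation written for this listing, not code from the papers.

The self-righting framework (Kessens et al.) reduces, in its final step, to traversing a directed graph of stable states whose edge costs reflect potential-energy changes. Below is a minimal sketch of that step only; the toy graph, state names, and cost values are invented for illustration.

```python
# Minimal sketch: self-righting as shortest-path search over a directed graph
# of stable states. The toy graph and edge costs below are invented values,
# not taken from the paper.
import heapq

def cheapest_righting_sequence(graph, start, goal):
    """Dijkstra over graph: {state: [(next_state, transition_cost), ...]}."""
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        for nxt, edge_cost in graph.get(state, []):
            new_cost = cost + edge_cost
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None  # goal unreachable: the robot cannot self-right from 'start'

# Toy example: three resting configurations of a planar robot with a 1-DOF arm;
# edge weights stand in for the potential-energy change of each transition.
toy_graph = {
    "on_back": [("on_side", 2.5)],
    "on_side": [("upright", 1.0), ("on_back", 0.5)],
    "upright": [],
}
print(cheapest_righting_sequence(toy_graph, "on_back", "upright"))
# -> (3.5, ['on_back', 'on_side', 'upright'])
```

Dijkstra is used here simply because the transition costs are non-negative; whether a righting sequence exists at all is answered by reachability of the goal node.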
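
For the OpenFABMAP talk (Glover et al.), the sketch below illustrates only the general appearance-based matching idea: comparing bag-of-visual-words histograms with cosine similarity. It is a deliberate simplification, not the FAB-MAP probabilistic model or the OpenFABMAP API; the vocabulary size, histograms, and threshold are invented.

```python
# Rough illustration of appearance-based place matching with bag-of-visual-words
# histograms and cosine similarity. This is a simplification, NOT the FAB-MAP
# probabilistic model that OpenFABMAP implements.
import numpy as np

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom > 0 else 0.0

def best_loop_closure(query_hist, place_hists, threshold=0.8):
    """Return (index, score) of the most similar previously visited place,
    or None if no place exceeds the acceptance threshold."""
    if not place_hists:
        return None
    scores = [cosine_similarity(query_hist, h) for h in place_hists]
    best = int(np.argmax(scores))
    return (best, scores[best]) if scores[best] >= threshold else None

# Toy histograms over a 5-word visual vocabulary for three visited places.
places = [np.array([4, 0, 1, 0, 2]),
          np.array([0, 3, 0, 5, 1]),
          np.array([1, 1, 1, 1, 1])]
query = np.array([3, 0, 2, 0, 2])
print(best_loop_closure(query, places))   # expected to match place 0
```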
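
For the TOF-stereo fusion talk (Gandhi et al.), the sketch below grows disparities outward from TOF-derived seeds, gating candidates by a coarse depth prior. The paper's Bayesian similarity model is replaced here by a crude absolute intensity difference; function names and parameter values are invented.

```python
# Simplified sketch of seed-growing disparity propagation: TOF-derived seeds
# are expanded to neighbouring pixels whenever a crude photometric cost agrees
# with the coarse depth prior. The cost below is a stand-in for the paper's
# Bayesian similarity model; all parameters are illustrative.
import heapq
import numpy as np

def grow_disparity(left, right, seeds, prior, max_dev=2, max_diff=20.0):
    """left, right: rectified grayscale images (2-D arrays);
    seeds: [(row, col, disparity)] projected from the TOF camera;
    prior: per-pixel coarse disparity upsampled from the TOF data."""
    h, w = left.shape
    disp = np.full((h, w), -1, dtype=int)
    heap = [(0.0, r, c, d) for r, c, d in seeds]   # seeds enter with zero cost
    heapq.heapify(heap)
    while heap:
        cost, r, c, d = heapq.heappop(heap)        # best-first: lowest cost wins
        if disp[r, c] != -1:
            continue                               # pixel already assigned
        disp[r, c] = d
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= rr < h and 0 <= cc < w and disp[rr, cc] == -1:
                for dd in (d - 1, d, d + 1):       # stay near the parent disparity
                    if 0 <= cc - dd < w and abs(dd - prior[rr, cc]) <= max_dev:
                        c_match = abs(float(left[rr, cc]) - float(right[rr, cc - dd]))
                        if c_match <= max_diff:    # photometric consistency gate
                            heapq.heappush(heap, (c_match, rr, cc, dd))
    return disp                                    # -1 marks pixels left unmatched
```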
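
For the RoboFrameNet talk (Thomas and Jenkins), the sketch below shows the semantic-frame idea in miniature: a verb and its dependents fill a frame, and a filled frame is grounded to an action request. The frame inventory and action names are invented and do not reflect RoboFrameNet's actual ROS interfaces.

```python
# Minimal sketch of the semantic-frame idea: a parsed verb and its dependents
# fill a frame, and a filled frame is grounded to a robot action request. The
# frame inventory and action names are invented for illustration and are not
# RoboFrameNet's actual ROS interfaces.
from dataclasses import dataclass, field

@dataclass
class SemanticFrame:
    name: str                                     # e.g. "motion"
    verbs: frozenset                              # lexical units evoking the frame
    elements: dict = field(default_factory=dict)  # frame elements, e.g. {"goal": "kitchen"}

FRAMES = [
    SemanticFrame("motion", frozenset({"go", "drive", "move"})),
    SemanticFrame("grasping", frozenset({"grab", "pick", "take"})),
]

def fill_frame(verb, dependents):
    """Match a verb to the frame it evokes and copy its dependents in."""
    for frame in FRAMES:
        if verb in frame.verbs:
            return SemanticFrame(frame.name, frame.verbs, dict(dependents))
    return None

def ground(frame):
    """Map a filled frame to a (hypothetical) robot action request."""
    if frame.name == "motion":
        return ("navigate_to", frame.elements.get("goal"))
    if frame.name == "grasping":
        return ("pick_up", frame.elements.get("theme"))
    return None

print(ground(fill_frame("go", {"goal": "kitchen"})))   # ('navigate_to', 'kitchen')
```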
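
For the mixture-of-Gaussian-processes talk (Kim and Kim), the sketch below splits the training points into clusters, fits one Gaussian process classifier per cluster, and blends the predictions with distance-based weights. It uses scikit-learn, k-means, and a simple proximity weighting as stand-ins for the paper's line-based clustering and merging scheme; all parameter choices are illustrative.

```python
# Simplified sketch of occupancy mapping with a mixture of Gaussian processes:
# the training points are split into clusters, one GP classifier is fit per
# cluster, and the per-cluster predictions are blended with distance-based
# weights. k-means and the blending rule are stand-ins for the paper's
# line-based clustering and merging scheme.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def fit_mixture(X, y, n_clusters=4):
    """X: (n, 2) map positions; y: 1 = occupied, 0 = free.
    Each cluster is assumed to contain both occupied and free samples."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    experts = []
    for k in range(n_clusters):
        mask = km.labels_ == k
        gp = GaussianProcessClassifier(kernel=RBF(length_scale=1.0))
        gp.fit(X[mask], y[mask])                  # local occupancy model
        experts.append((km.cluster_centers_[k], gp))
    return experts

def predict_occupancy(experts, Xq, eps=1e-6):
    """Blend the local models, weighting each expert by proximity to Xq."""
    weights, probs = [], []
    for center, gp in experts:
        d = np.linalg.norm(Xq - center, axis=1)
        weights.append(1.0 / (d + eps))
        probs.append(gp.predict_proba(Xq)[:, 1])  # P(occupied)
    weights, probs = np.array(weights), np.array(probs)
    return (weights * probs).sum(axis=0) / weights.sum(axis=0)
```

Splitting the data before fitting keeps each GP's cubic training cost bounded by the cluster size rather than the full dataset, which is the motivation the abstract gives for the mixture.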