Technical session talks from ICRA 2012
The conference registration code needed to access these videos is available via this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact firstname.lastname@example.org
Embodied Intelligence - iCub
Learning Reusable Task Components Using Hierarchical Activity Grammars with Uncertainties
We present a novel learning method using activity grammars capable of learning reusable task components from a reasonably small number of samples under noisy conditions. Our linguistic approach aims to extract the hierarchical structure of activities, which can be recursively applied to help recognize unforeseen, more complicated tasks that share the same underlying structures. To achieve this goal, our method 1) actively searches for frequently occurring action symbols that are subsets of the input samples to effectively discover the hierarchy, and 2) explicitly takes into account the uncertainty values associated with input symbols due to the noise inherent in low-level detectors. In addition to experimenting with a synthetic dataset to systematically analyze the algorithm's performance, we apply our method in a human-led imitation learning environment where a robot learns reusable components of a task from short demonstrations in order to correctly imitate more complicated, longer demonstrations of the same task category. The results suggest that under a reasonable amount of noise, our method is capable of capturing the reusable structures of tasks and generalizing to cope with recursions.
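The core idea of discovering frequently occurring action symbols while weighting by detector uncertainty can be illustrated with a minimal sketch. This is not the paper's grammar-induction algorithm; it simply scores contiguous symbol n-grams across demonstrations, discounting each by the product of its detection confidences (all function and parameter names here are illustrative assumptions):

```python
from collections import defaultdict

def frequent_subsequences(samples, min_score=1.0, max_len=3):
    """Score contiguous action-symbol n-grams across noisy samples.

    Each sample is a list of (symbol, confidence) pairs. A subsequence's
    score accumulates the product of its symbols' confidences, so
    uncertain low-level detections contribute less -- a rough stand-in
    for the uncertainty handling described in the abstract.
    """
    scores = defaultdict(float)
    for sample in samples:
        for n in range(2, max_len + 1):
            for i in range(len(sample) - n + 1):
                window = sample[i:i + n]
                syms = tuple(s for s, _ in window)
                conf = 1.0
                for _, c in window:
                    conf *= c
                scores[syms] += conf
    # keep only subsequences whose accumulated score clears the threshold
    return {k: v for k, v in scores.items() if v >= min_score}
```

Frequent, confidently detected pairs such as ('a', 'b') below survive the threshold, while noisy one-off sequences are filtered out.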
Stabilization for the Compliant Humanoid Robot COMAN Exploiting Intrinsic and Controlled Compliance
This work presents the standing stabilization of a compliant humanoid robot against external force disturbances and variations of the terrain inclination. The novel contribution is the proposed control scheme, which consists of three strategies named compliance control in the transversal plane, body attitude control, and potential energy control, all combined with the intrinsic passive compliance of the robot. The physical compliant elements of the robot are exploited to react at the first instant of an impact, while active compliance control is applied to further absorb the impact and dissipate the elastic energy stored in the springs, preventing a high rate of spring recoil. The body attitude controller meanwhile regulates the spin angular momentum to provide more agile reactions by changing the body inclination. The potential energy control module constrains the robot's center of mass (COM) to a virtual slope to convert excessive kinetic energy into potential energy and prevent falling. Experiments were carried out with the proposed balance stabilization control, demonstrating superior balance performance. The compliant humanoid was capable of recovering from external force disturbances and moderate or even abrupt variations of the terrain inclination. Experimental data such as the impulse forces, real COM, center of pressure (COP), and spring elastic energy are presented and analyzed.
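The potential-energy strategy rests on a simple energy-balance argument: to absorb the COM's excess kinetic energy, the controller must raise the COM by enough that 0.5*m*v^2 = m*g*h. A minimal sketch of that arithmetic (the function name and the virtual-slope framing are illustrative; the paper's actual controller is more involved):

```python
G = 9.81  # gravitational acceleration, m/s^2

def virtual_slope_height(v_com):
    """Height gain (m) needed to convert the COM's kinetic energy
    into potential energy: 0.5*m*v^2 = m*g*h, so h = v^2 / (2*g).
    The mass cancels, so only the COM velocity is required."""
    return v_com ** 2 / (2.0 * G)
```

For example, a COM pushed to 1 m/s is fully arrested by a rise of roughly 5 cm along the virtual slope.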
Efficient Human-Like Walking for the COmpliant Humanoid COMAN Based on Kinematic Motion Primitives (kMPs)
Research in humanoid robotics in recent years has led to significant advances in terms of the ability to walk and even run. Yet, despite the general achievements in locomotion and control, energy efficiency is still one important area that requires further attention, especially as it is one of the major stepping stones leading to increased autonomy. This paper examines, and quantifies, the energetic benefits of introducing passive compliance into bipedal locomotion using COMAN, an intrinsically COmpliant huMANoid robot. The novelty of the proposed method consists of: i) the use of a method of gait synthesis based on kinematic Motion Primitives (kMPs) extracted from humans, ii) the frequency tuning of the resulting trajectories to excite the physical elasticity of the system, and the subsequent analysis of the energetic performance of the robot. The motivation is to assess the possible effects of using dynamic human-like, and human-derived, trajectories, with significant Center of Mass (CoM) vertical displacement, regulated in frequency around the frequency band of the system resonances, on the excitation of the compliant actuators, and subsequently to measure and verify any energetic benefit. Experimental results show that if the gait frequency is close to one of the main resonant frequencies of the robot, then the total work contribution of the elastic compliant element to the overall motion of the robot is positive (15% of the work required is generated by the springs).
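Extracting low-dimensional motion primitives from human joint trajectories is commonly done with a PCA-style decomposition. The sketch below assumes that construction (plain SVD of centered joint-angle data); it is an illustration of the general kMP idea, not COMAN's actual pipeline:

```python
import numpy as np

def extract_kmps(joint_trajectories, n_primitives=3):
    """PCA-style extraction of kinematic motion primitives.

    joint_trajectories: (T, J) array of J joint angles sampled over one
    gait cycle. Returns the top principal time-courses as candidate
    primitives (an assumed construction, sketched with plain SVD).
    """
    X = joint_trajectories - joint_trajectories.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    # primitives as time-courses: left singular vectors scaled by
    # their singular values, so dominant components carry more energy
    return U[:, :n_primitives] * S[:n_primitives]
```

On perfectly correlated joints the second primitive vanishes, showing how a few components can capture a whole gait; the resulting trajectories can then be time-scaled to shift the gait frequency toward the robot's resonance band.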
Closed-loop primitives: A method to generate and recognize reaching actions from demonstration
Studies of mirror neurons observed in monkeys indicate that recognition of others' actions activates neural circuits that are also responsible for generating the very same actions in the animal. The mirror neuron hypothesis argues that such an overlap between action generation and recognition can provide a shared worldview among individuals and be a key pillar for communication. Inspired by these findings, this paper extends a learning by demonstration method for online recognition of observed actions. The proposed method is shown to recognize and generate different reaching actions demonstrated by a human on a humanoid robot platform. Experiments show that the proposed method is robust both to occlusions during the observed actions and to variations in the speed of the observed actions. The results are successfully demonstrated in an interactive game with the iCub humanoid robot platform.
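One simple way to get the speed invariance the abstract reports is to time-normalize trajectories before matching them against learned primitives. The sketch below is a generic nearest-primitive classifier under that assumption, not the paper's closed-loop formulation (all names are illustrative):

```python
import numpy as np

def resample(traj, n=50):
    """Time-normalize a 1-D trajectory to n samples, so slow and fast
    executions of the same motion become directly comparable."""
    t = np.linspace(0.0, 1.0, len(traj))
    tn = np.linspace(0.0, 1.0, n)
    return np.interp(tn, t, traj)

def recognize(observed, primitives):
    """Classify an observed trajectory as the nearest learned primitive
    after time normalization (a generic stand-in for the paper's
    closed-loop recognition)."""
    obs = resample(observed)
    dists = {name: np.linalg.norm(obs - resample(p))
             for name, p in primitives.items()}
    return min(dists, key=dists.get)
```

A reaching motion executed at nearly double speed still matches its primitive, because only the shape of the trajectory survives normalization.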
Active Object Recognition on a Humanoid Robot
Interaction with its environment is a key requisite for a humanoid robot. Especially the ability to recognize and manipulate unknown objects is crucial to successfully work in natural environments. Visual object recognition, however, still remains a challenging problem, as three-dimensional objects often give rise to ambiguous, two-dimensional views. Here, we propose a perception-driven, multisensory exploration and recognition scheme to actively resolve ambiguities that emerge at certain viewpoints. We define an efficient method to acquire two-dimensional views in an object-centered task space and sample characteristic views on a view sphere. Information is accumulated during the recognition process and used to select actions expected to be most beneficial in discriminating similar objects. Besides visual information we take into account proprioceptive information to create more reliable hypotheses. Simulation and real-world results clearly demonstrate the efficiency of active, multisensory exploration over passive, vision-only recognition methods.
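Selecting the action "expected to be most beneficial in discriminating similar objects" is typically framed as maximizing expected information gain over the object hypotheses. The sketch below shows that generic rule (the abstract does not specify the exact criterion, so treat this as an assumption): pick the viewpoint whose predicted posterior most reduces the entropy of the current belief.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete belief over object hypotheses."""
    return -sum(q * math.log(q) for q in p if q > 0)

def select_next_view(belief, predicted_posteriors):
    """Pick the viewpoint whose predicted posterior most reduces the
    entropy over object hypotheses -- a generic information-gain rule,
    not necessarily the paper's exact criterion.

    belief: current probabilities over objects.
    predicted_posteriors: dict mapping viewpoint -> predicted posterior.
    """
    h0 = entropy(belief)
    best_view, best_gain = None, -1.0
    for view, post in predicted_posteriors.items():
        gain = h0 - entropy(post)  # expected reduction in uncertainty
        if gain > best_gain:
            best_view, best_gain = view, gain
    return best_view
```

Starting from a 50/50 ambiguity, a viewpoint predicted to yield a 0.9/0.1 posterior is preferred over one that barely sharpens the belief.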
Imitation Learning of Non-Linear Point-To-Point Robot Motions Using Dirichlet Processes
In this paper we discuss the use of the infinite Gaussian mixture model and Dirichlet processes for learning robot movements from demonstrations. The starting point of this work is an earlier paper where the authors learn a non-linear dynamic robot movement model from a small number of observations. The model in that work is learned using a classical finite Gaussian mixture model (FGMM) where the Gaussian mixtures are appropriately constrained. The problem with this approach is that one needs to make a good guess for how many mixtures the FGMM should use. In this work, we generalize this approach to use an infinite Gaussian mixture model (IGMM) which does not have this limitation. Instead, the IGMM automatically finds the number of mixtures that are necessary to reflect the data complexity. For use in the context of a non-linear dynamic model, we develop a Constrained IGMM (CIGMM). We validate our algorithm on the same data that was used in the earlier paper, where the authors use motion capture devices to record the demonstrations. As further validation we test our approach on novel data acquired on our iCub in a different demonstration scenario in which the robot is physically driven by the human demonstrator.
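The reason the IGMM "automatically finds the number of mixtures" is the Dirichlet process prior, whose partition structure is described by the Chinese restaurant process: each new observation either joins an existing component in proportion to its size, or opens a new one with probability proportional to a concentration parameter alpha. A minimal sketch of one draw from that prior (just the partition, without the Gaussian likelihoods or the constraints of the CIGMM):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Draw cluster assignments for n points from a Chinese restaurant
    process, the partition prior behind the IGMM. The number of
    clusters is not fixed in advance but grows with the data."""
    rng = random.Random(seed)
    counts = []        # customers per table (points per cluster)
    assignments = []
    for i in range(n):
        total = i + alpha  # existing customers plus concentration mass
        r = rng.random() * total
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:            # join table k, prob. c / (i + alpha)
                assignments.append(k)
                counts[k] += 1
                break
        else:                      # open a new table, prob. alpha / (i + alpha)
            assignments.append(len(counts))
            counts.append(1)
    return assignments
```

In a full IGMM sampler this step alternates with resampling the component parameters; larger alpha tends to produce more clusters, but the data, not a user-chosen K, determines the final count.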