Technical session talks from ICRA 2012
The conference registration code needed to access these videos is available through this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some of the videos missing? If you provided a consent form for your video to be published and it is still missing, please contact support@techtalks.tv.
Physical Human-Robot Interaction
- Planning Body Gesture of Android for Multi-Person Human-Robot Interaction
  Natural body gesture, as well as speech dialog, is crucial for human-robot interaction and human-robot symbiosis. We have already proposed a real-time gesture planning method. In this paper, we give this method more flexibility by adding a motion parameterization function. This function is especially important in multi-person HRI because it adapts to changes in the speaker's and/or object's location. We implement our method in a multi-person HRI system on the android Actroid-SIT and conduct two experiments to evaluate the precision of the gestures and the impressions the Actroid makes on humans. Through these experiments, we confirmed that our method gives humans a more sophisticated impression.
- Variable Admittance Control of a Four-Degree-Of-Freedom Intelligent Assist Device
  Robots are currently used in some applications to enhance human performance, and it is expected that human/robot interactions will become more frequent in the future. In order to achieve effective human augmentation, the cooperation must be very intuitive to the human operator. This paper presents a variable admittance control approach to improve the intuitiveness of the system. The proposed variable admittance law is based on the inference of human intentions using desired velocity and acceleration. Stability issues are discussed and a controller design example is given. Finally, experimental results obtained with a full-scale prototype of an intelligent assist device are presented in order to demonstrate the performance of the algorithm.
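  As a rough illustration of the general idea only (not the controller described in this paper), the Python sketch below varies a virtual damping term in a one-dimensional admittance loop based on a crude estimate of operator intent; the `variable_damping` heuristic, the mass and damping values, and the time step are all assumptions made for this example.

```python
import numpy as np

def variable_damping(force, velocity, c_min=10.0, c_max=80.0, gain=5.0):
    """Pick a damping value from the apparent operator intent (1-DOF case)."""
    intent = force * velocity          # > 0: operator accelerating, < 0: braking
    # Low damping when accelerating (free, easy motion); high damping when braking.
    alpha = 1.0 / (1.0 + np.exp(gain * np.sign(intent) * min(abs(intent), 1.0)))
    return c_min + alpha * (c_max - c_min)

def admittance_step(force, velocity, mass=5.0, dt=0.001):
    """One Euler step of M*dv/dt + C(F, v)*v = F; returns the updated velocity."""
    c = variable_damping(force, velocity)
    accel = (force - c * velocity) / mass
    return velocity + accel * dt

# Example: a constant 20 N push applied from rest for one second.
v = 0.0
for _ in range(1000):
    v = admittance_step(20.0, v)
print(f"velocity after 1 s of a 20 N push: {v:.3f} m/s")
```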
- Extraction of Latent Kinematic Relationships between Human Users and Assistive Robots
  In this study, we propose a control method for movement-assistive robots using measured signals from human users. Some wearable assistive robots have mechanisms that can be adjusted to human kinematics (e.g., adjustable link lengths). However, since the human body has a complicated joint structure, it is generally difficult to design an assistive robot that mechanically fits human users well. We focus on developing a control algorithm that generates movements of a wearable assistive robot corresponding to those of its human user, even when the kinematic structures of the robot and the user differ. We first extract the latent kinematic relationship between a human user and the assistive robot. The extracted relationship is then used to control the assistive robot by converting human behavior into corresponding joint-angle trajectories for the robot. The proposed approach is evaluated on a simulated robot model and on our newly developed exoskeleton robot, XoR.
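  The paper's extraction method is not reproduced here; purely as a hedged sketch of the underlying idea of mapping human joint angles to the joints of a differently structured robot, the example below fits a plain ridge-regression mapping on paired demonstration data. The toy dimensions and the `fit_mapping` and `map_pose` helpers are illustrative assumptions.

```python
import numpy as np

def fit_mapping(human_angles, robot_angles, ridge=1e-3):
    """Least-squares map from human joint angles (T, n_h) to robot joint angles (T, n_r)."""
    X = np.hstack([human_angles, np.ones((human_angles.shape[0], 1))])  # append a bias column
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ robot_angles)  # weights of shape (n_h + 1, n_r)

def map_pose(weights, human_pose):
    """Convert one measured human pose into robot joint-angle targets."""
    return np.append(human_pose, 1.0) @ weights

# Toy demonstration data: a 4-joint "human" driving a 3-joint "robot".
rng = np.random.default_rng(0)
human = rng.uniform(-1.0, 1.0, size=(200, 4))
robot = human @ rng.normal(size=(4, 3)) + 0.01 * rng.normal(size=(200, 3))

weights = fit_mapping(human, robot)
print("predicted:", map_pose(weights, human[0]), "actual:", robot[0])
```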
- Design & Personalization of a Cooperative Carrying Robot Controller
  In the near future, as robots become more advanced and affordable, we can envision their use as intelligent assistants in a variety of domains. An exemplar human-robot task identified in many previous works is cooperatively carrying a physically large object. An important task objective is to keep the carried object level. In this work, we propose an admittance-based controller that maintains a level orientation of a cooperatively carried object. The controller raises or lowers its end of the object with a human-like behavior in response to perturbations in the height of the other end of the object (e.g., the end supported by the human user). We also propose a novel tuning procedure, and find that most users are in close agreement about preferring a slightly under-damped controller response, even though they vary in their preferences regarding the speed of the controller's response.
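  To illustrate what a "slightly under-damped" height response means in practice (this is a generic second-order model, not the controller or tuning procedure from the paper), the sketch below simulates the robot-end height tracking a 10 cm step at the human end for two damping ratios; `wn`, `zeta`, and the step size are assumed values.

```python
import numpy as np

def simulate_height_response(zeta, wn=4.0, dt=0.002, t_end=3.0, step=0.10):
    """Robot-end height responding to a 10 cm step at the human end (2nd-order model)."""
    z, dz = 0.0, 0.0
    heights = []
    for _ in range(int(t_end / dt)):
        # z'' = wn^2 * (z_ref - z) - 2 * zeta * wn * z'
        ddz = wn ** 2 * (step - z) - 2.0 * zeta * wn * dz
        dz += ddz * dt
        z += dz * dt
        heights.append(z)
    return np.array(heights)

slightly_underdamped = simulate_height_response(zeta=0.7)   # small overshoot, feels responsive
critically_damped = simulate_height_response(zeta=1.0)      # no overshoot, slightly slower
print(f"peak height, zeta=0.7: {slightly_underdamped.max():.3f} m")
print(f"peak height, zeta=1.0: {critically_damped.max():.3f} m")
```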
- Trust-Driven Interactive Visual Navigation for Autonomous Robots
  We describe a model of "trust" in human-robot systems that is inferred from their interactions, and inspired by similar concepts relating to trust among humans. This computable quantity allows a robot to estimate the extent to which its performance is consistent with a human's expectations, with respect to task demands. Our trust model drives an adaptive mechanism that dynamically adjusts the robot's autonomous behaviors, in order to improve the efficiency of the collaborative team. We illustrate this trust-driven methodology through an interactive visual robot navigation system. This system is evaluated through controlled user experiments and a field demonstration using an aerial robot.
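  The abstract does not give the actual trust model; purely as a hedged sketch of the trust-driven idea, the example below tracks trust as an exponentially weighted score of how well each robot action matched expectations and maps it onto a coarse autonomy mode. The `TrustModel` class, its thresholds, and the learning rate are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class TrustModel:
    trust: float = 0.5        # current trust estimate in [0, 1]
    learn_rate: float = 0.2   # how quickly trust reacts to new evidence

    def update(self, performance: float) -> float:
        """performance in [0, 1]: how well the last robot action matched expectations."""
        self.trust += self.learn_rate * (performance - self.trust)
        self.trust = min(max(self.trust, 0.0), 1.0)
        return self.trust

    def autonomy_mode(self) -> str:
        """Map the current trust level onto a coarse autonomy mode."""
        if self.trust > 0.75:
            return "fully autonomous"
        if self.trust > 0.4:
            return "autonomous with confirmation"
        return "manual / teleoperated"

# Example: trust rises after good navigation legs and drops after a failure.
model = TrustModel()
for perf in [0.9, 0.9, 0.8, 0.1, 0.9]:
    print(f"perf={perf:.1f} -> trust={model.update(perf):.2f}, mode={model.autonomy_mode()}")
```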
- The 20-DOF Miniature Humanoid MH-2: A Wearable Communication System
  The 20-DOF miniature humanoid "MH-2", designed as a wearable telecommunicator, is a personal telerobot system. An operator can communicate with remote people through the robot, which acts as an avatar of the operator. To date, four prototypes of the wearable telecommunicator, T1, T2, T3 and MH-1, have been developed as research platforms. MH-1 is also a miniature humanoid robot, with 11 DOF, for mutual telexistence. Although a human-like appearance might be important for such communication systems, MH-1 is unable to achieve sophisticated gestures due to its lack of both wrist and body motions. In this paper, to tackle this problem, a 3-DOF parallel wire mechanism with a novel wire arrangement is introduced for the wrist, and 3-DOF body motions are also adopted. Consequently, a 20-DOF miniature humanoid with dual 7-DOF arms has been designed and developed. Details of the concept and design are discussed, and fundamental experiments with the developed 7-DOF arm are executed to confirm its mechanical properties.
- All Sessions
- 3D Surface Models, Point Cloud Processing
- Needle Steering
- Networked Robots
- Grasping and Manipulation
- Motion Planning II
- Estimation and Control for UAVs
- Multi-Robots: Task Allocation
- Localization
- Perception for Autonomous Vehicles
- Rehabilitation Robotics
- Modular Robots & Multi-Agent Systems
- Mechanism Design of Mobile Robots
- Bipedal Robot Control
- Navigation and Visual Sensing
- Autonomy and Vision for UAVs
- RGB-D Localization and Mapping
- Micro and Nano Robots II
- Embodied Intelligence - Compliant Actuators
- Grasping: Modeling, Analysis and Planning
- Learning and Adaptive Control of Robotic Systems I
- Marine Robotics I
- Animation & Simulation
- Planning and Navigation of Biped Walking
- Sensing for Manipulation
- Sampling-Based Motion Planning
- Minimally Invasive Interventions II
- Biologically Inspired Robotics II
- Underactuated Robots
- Semiconductor Manufacturing
- Haptics
- Learning and Adaptive Control of Robotic Systems II
- Parts Handling and Manipulation
- Space Robotics
- Stochasticity in Robotics and Biological Systems
- Path Planning and Navigation
- Biomimetics
- Micro/Nanoscale Automation
- Multi-Legged Robots
- Localization II
- Results of ICRA 2011 Robot Challenge
- Teleoperation
- Applied Machine Learning
- Hand Modeling and Control
- Multi-Robot Systems I
- Medical Robotics I
- Micro/Nanoscale Automation II
- Visual Learning
- Continuum Robots
- Robust and Adaptive Control of Robotic Systems
- High Level Robot Behaviors
- Biologically Inspired Robotics
- Novel Robot Designs
- Compliance Devices and Control
- Video Session
- AI Reasoning Methods
- Redundant Robots
- Localization and Mapping
- Climbing Robots
- Embodied Intelligence - iCub
- Underactuated Grasping
- Data Based Learning
- Range Imaging
- Collision
- Industrial Robotics
- Human Detection and Tracking
- Trajectory Planning and Generation
- Stochastic Motion Planning
- Medical Robotics II
- Vision-Based Attention and Interaction
- Control and Planning for UAVs
- Embodied Soft Robots
- Mapping
- SLAM I
- Image-Guided Interventions
- Novel Actuation Technologies
- Micro/Nanoscale Automation III
- Human-Like Biped Locomotion
- Marine Robotics II
- Force & Tactile Sensors
- Motion Path Planning I
- Mobile Manipulation: Planning & Control
- Simulation and Search in Grasping
- Control of UAVs
- Grasp Planning
- Humanoid Motion Planning and Control
- Surveillance
- Environment Mapping
- Octopus-Inspired Robotics
- Soft Tissue Interaction
- Pose Estimation
- Cable-Driven Mechanisms
- Parallel Robots
- SLAM II
- Intelligent Manipulation Grasping
- Formal Methods
- Sensor Networks
- Force, Torque and Contacts in Grasping and Assembly
- Hybrid Legged Robots
- Visual Tracking
- Physical Human-Robot Interaction
- Robotic Software, Programming Environments, and Frameworks
- Minimally Invasive Interventions I
- Multi-Robot Systems II
- Grasping: Learning and Estimation
- Non-Holonomic Motion Planning
- Calibration and Identification
- Compliant Nanopositioning
- Micro and Nano Robots I