List of all recorded talks

  • Using Manipulation Primitives for Brick Sorting in Clutter Authors: Gupta, Megha; Sukhatme, Gaurav
    This paper explores the idea of manipulation-aided perception and grasping in the context of sorting small objects on a tabletop. We present a robust pipeline that combines perception and manipulation to accurately sort Duplo bricks by color and size. The pipeline uses two simple motion primitives to manipulate the scene in ways that help the robot improve its perception, enabling it to sort cluttered piles of Duplo bricks accurately. We present experimental results on the PR2 robot comparing brick sorting with and without the aid of manipulation primitives; these results show the benefits of manipulation, particularly as the degree of clutter in the environment increases.
  • A constraint-based programming approach to physical human-robot Interaction Authors: Borghesan, Gianni; Willaert, Bert; De Schutter, Joris
    This work extends the constraint-based formalism iTaSC to scenarios where physical human-robot interaction plays a central role, as is the case in, e.g., surgical robotics, rehabilitation robotics, and household robotics. To fully exploit the potential of robots in these scenarios, it should be possible to enforce force and geometric constraints in an easy and flexible way. iTaSC makes it possible to express such constraints in different frames defined in arbitrary spaces and to obtain control setpoints in a systematic way. Previous implementations of iTaSC considered industrial velocity-controlled robots. This work presents an extension of the iTaSC framework that takes advantage of the back-drivability of a robot, thus avoiding the use of force sensors. As a case study, the iTaSC framework is then used to formulate a (position-position) teleoperation scheme. The theoretical findings are experimentally validated on a PR2 robot.
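The constraint-enforcement idea can be sketched, in heavily simplified form, as a resolved-rate law that drives a task-space constraint error to zero through the task Jacobian. This is an illustrative stand-in, not the iTaSC solver itself, which handles multiple constraint frames, arbitrary spaces, and estimation; all numbers below are hypothetical.

```python
import numpy as np

def constraint_velocity(e, J, gain=1.0):
    """Joint velocities that exponentially decay a constraint error e,
    mapped through the task Jacobian J: qdot = pinv(J) @ (-gain * e).
    (Simplified sketch; not the actual iTaSC formulation.)"""
    return np.linalg.pinv(J) @ (-gain * np.asarray(e, dtype=float))

# Example: a 2-D positional constraint on a robot whose task Jacobian
# happens to be the identity (hypothetical values for illustration).
qdot = constraint_velocity([0.2, -0.1], np.eye(2), gain=2.0)
```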
  • Planning Body Gesture of Android for Multi-Person Human-Robot Interaction Authors: Kondo, Yutaka; Takemura, Kentaro; Takamatsu, Jun; Ogasawara, Tsukasa
    Natural body gesture, as well as speech dialog, is crucial for human-robot interaction and human-robot symbiosis. We previously proposed a real-time gesture planning method. In this paper, we give this method more flexibility by adding a motion parameterization function. This function is especially important in multi-person HRI because it adapts gestures to changes in the locations of speakers and objects. We implement our method in a multi-person HRI system on the android Actroid-SIT and conduct two experiments to evaluate the precision of the gestures and the impressions they make on humans. These experiments confirm that our method gives humans a more sophisticated impression.
  • Variable Admittance Control of a Four-Degree-Of-Freedom Intelligent Assist Device Authors: Lecours, Alexandre; Mayer-St-Onge, Boris; Gosselin, Clement
    Robots are already used in some applications to enhance human performance, and human/robot interactions are expected to become more frequent in the future. For human augmentation to be effective, the cooperation must be very intuitive to the human operator. This paper presents a variable admittance control approach to improve the intuitiveness of the system. The proposed variable admittance law is based on the inference of human intentions from desired velocity and acceleration. Stability issues are discussed and a controller design example is given. Finally, experimental results obtained with a full-scale prototype of an intelligent assist device are presented to demonstrate the performance of the algorithm.
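A toy version of such a variable admittance law can be sketched as follows. The gains, the sign-based intent rule, and the abrupt damping switch are illustrative assumptions, not the controller from the paper:

```python
def variable_damping(v, a, b_low=5.0, b_high=40.0):
    """Crude intent inference: velocity and acceleration of the same sign
    suggest the operator wants to speed up (low damping, device feels
    light); otherwise assume braking (high damping helps stopping)."""
    return b_low if v * a > 0 else b_high

def admittance_step(v, a_prev, f_h, dt, mass=10.0):
    """One forward-Euler step of the admittance model M*dv/dt + B*v = f_h,
    with B chosen by the intent rule above."""
    b = variable_damping(v, a_prev)
    a = (f_h - b * v) / mass
    return v + a * dt, a

# Under a constant 10 N push the velocity settles near f_h/b_low = 2 m/s.
v, a = 0.0, 0.0
for _ in range(10_000):
    v, a = admittance_step(v, a, f_h=10.0, dt=0.01)
```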
  • Extraction of Latent Kinematic Relationships between Human Users and Assistive Robots Authors: Morimoto, Jun; Hyon, Sang-Ho
    In this study, we propose a control method for movement assistive robots that uses signals measured from human users. Some wearable assistive robots have mechanisms that can be adjusted to human kinematics (e.g., adjustable link lengths). However, since the human body has a complicated joint structure, it is generally difficult to design an assistive robot that mechanically fits human users well. We focus on developing a control algorithm that generates movements of a wearable assistive robot corresponding to those of a human user, even when the kinematic structures of the robot and the user differ. We first extract the latent kinematic relationship between a human user and the assistive robot. The extracted relationship is then used to control the assistive robot by converting human behavior into corresponding joint-angle trajectories for the robot. The proposed approach is evaluated on a simulated robot model and on our newly developed exoskeleton robot, XoR.
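As a simplified stand-in for the latent-relationship idea, one can learn a linear map between mismatched human and robot joint spaces from paired demonstration data. The paper's latent-variable extraction is richer than this least-squares sketch; the dimensions and data here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W_true = rng.normal(size=(3, 4))           # hidden human->robot relation
X = rng.normal(size=(200, 4))              # human joint-angle samples (4-D)
Y = X @ W_true.T                           # paired robot joint angles (3-D)

# Least-squares fit of the mapping from human to robot joint space.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # shape (4, 3)

def human_to_robot(q_human):
    """Convert a measured human posture into robot joint-angle targets."""
    return np.asarray(q_human) @ W
```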
  • Design & Personalization of a Cooperative Carrying Robot Controller Authors: Parker, Chris; Croft, Elizabeth
    In the near future, as robots become more advanced and affordable, we can envision their use as intelligent assistants in a variety of domains. An exemplar human-robot task identified in many previous works is cooperatively carrying a physically large object. An important task objective is to keep the carried object level. In this work, we propose an admittance-based controller that maintains a level orientation of a cooperatively carried object. The controller raises or lowers its end of the object with a human-like behavior in response to perturbations in the height of the other end of the object (e.g., the end supported by the human user). We also propose a novel tuning procedure, and find that most users are in close agreement about preferring a slightly under-damped controller response, even though they vary in their preferences regarding the speed of the controller's response.
  • Trust-Driven Interactive Visual Navigation for Autonomous Robots Authors: Xu, Anqi; Dudek, Gregory
    We describe a model of "trust" in human-robot systems that is inferred from their interactions, and inspired by similar concepts relating to trust among humans. This computable quantity allows a robot to estimate the extent to which its performance is consistent with a human’s expectations, with respect to task demands. Our trust model drives an adaptive mechanism that dynamically adjusts the robot's autonomous behaviors, in order to improve the efficiency of the collaborative team. We illustrate this trust-driven methodology through an interactive visual robot navigation system. This system is evaluated through controlled user experiments and a field demonstration using an aerial robot.
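A minimal sketch of such a computable trust estimate might look like the following first-order update; the update form, rate, and mapping to autonomy are assumptions for illustration, not the paper's model:

```python
class TrustModel:
    """Scalar trust in [0, 1], nudged toward each observed performance
    signal (e.g. how well autonomous behavior matched operator input)
    and used to scale the robot's level of autonomy."""

    def __init__(self, trust=0.5, rate=0.1):
        self.trust = trust
        self.rate = rate

    def update(self, performance):
        # Move trust a fraction of the way toward the performance signal.
        self.trust += self.rate * (performance - self.trust)
        self.trust = min(1.0, max(0.0, self.trust))
        return self.trust

    def autonomy_level(self):
        # Higher trust -> more autonomy, fewer requested interventions.
        return self.trust
```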
  • The 20-DOF Miniature Humanoid MH-2: A Wearable Communication System Authors: Tsumaki, Yuichi; Ono, Fumiaki; Tsukuda, Taisuke
    The 20-DOF miniature humanoid "MH-2", designed as a wearable telecommunicator, is a personal telerobot system. An operator can communicate with remote people through the robot, which acts as an avatar of the operator. To date, four prototypes of the wearable telecommunicator, T1, T2, T3, and MH-1, have been developed as research platforms. MH-1 is also a miniature humanoid robot, with 11 DOF for mutual telexistence. Although a human-like appearance may be important for such communication systems, MH-1 cannot achieve sophisticated gestures because it lacks both wrist and body motions. In this paper, to tackle this problem, a 3-DOF parallel wire mechanism with a novel wire arrangement is introduced for the wrist, and 3-DOF body motions are also adopted. Consequently, a 20-DOF miniature humanoid with dual 7-DOF arms has been designed and developed. Details of the concept and design are discussed, and fundamental experiments with the developed 7-DOF arm are executed to confirm its mechanical properties.
  • Automatic Camera and Range Sensor Calibration using a single Shot Authors: Geiger, Andreas; Moosmann, Frank; Car, Ömer; Schuster, Bernhard
    As a core problem in robotics and vision, camera and range sensor calibration has been researched intensively over recent decades. Nevertheless, robotic research efforts are still often heavily delayed by the requirement of setting up a calibrated system consisting of multiple cameras and range measurement units. To remove this burden, we present a toolbox with a web interface for fully automatic camera-to-camera and camera-to-range calibration. Our system is easy to set up and recovers intrinsic and extrinsic camera parameters as well as the transformation between cameras and range sensors within one minute. In contrast to existing calibration approaches, which often require user intervention, the proposed method is robust to varying imaging conditions, fully automatic, and easy to use, since a single image and range scan prove sufficient for most calibration scenarios. Experimentally, we demonstrate that the proposed checkerboard corner detector significantly outperforms the current state of the art. Furthermore, the proposed camera-to-range registration method is able to discover multiple solutions in the case of ambiguities. Experiments with a variety of sensors, such as grayscale and color cameras, the Kinect 3D sensor, and the Velodyne HDL-64 laser scanner, show the robustness of our method in different indoor and outdoor settings and under various lighting conditions.
  • Scale-Only Visual Homing from an Omnidirectional Camera Authors: Pradalier, Cedric; Liu, Ming; Pomerleau, Francois; Siegwart, Roland
    Visual homing is the process by which a mobile robot moves to a Home position using only information extracted from visual data. The approach we present in this paper uses image keypoints (e.g., SIFT) extracted from omnidirectional images and matches the current set of keypoints against the set recorded at the Home location. We first formulate three different visual homing problems for an uncalibrated omnidirectional camera within the Image-Based Visual Servoing (IBVS) framework; we then propose a novel simplified homing approach, inspired by IBVS, that relies only on the scale information of the SIFT features and whose computational cost is linear in the number of features. We report on the application of our method to a commonly cited indoor database, where it outperforms other approaches. We also briefly present results on a real robot and outline the integration of the method into a topological navigation framework.
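The scale-only intuition can be sketched as a simple vote over matched keypoints. This is a back-of-envelope illustration, not the paper's IBVS-derived control law; the log-ratio weighting is an assumption:

```python
import math

def homing_direction(matches):
    """matches: list of (bearing_rad, scale_home, scale_current) tuples for
    keypoints matched between the current and Home omnidirectional images.
    A feature that looks smaller than at Home is now farther away, so it
    pulls the robot toward its bearing; a feature that looks larger pushes
    away. Returns a unit heading vector (x, y) toward Home."""
    hx = hy = 0.0
    for bearing, s_home, s_cur in matches:
        w = math.log(s_home / s_cur)   # > 0: feature shrank -> approach it
        hx += w * math.cos(bearing)
        hy += w * math.sin(bearing)
    norm = math.hypot(hx, hy) or 1.0
    return hx / norm, hy / norm

# Hypothetical example: the feature straight ahead shrank and the feature
# behind grew, so Home lies straight ahead.
heading = homing_direction([(0.0, 2.0, 1.0), (math.pi, 1.0, 2.0)])
```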