List of all recorded talks

  • Video Summary of D.R.O.P., the Durable Reconnaissance and Observation Platform Authors: McKenzie, Clifford; Parness, Aaron
    This video introduces a small, new reconnaissance robot that can climb concrete surfaces up to 85 degrees at a rate of 25 cm/s, make rapid horizontal-to-vertical transitions, carry an audio/visual payload, and survive impacts from 3 m. The robot can travel at over 45 cm/s on flat ground and turn in place. The Durable Reconnaissance and Observation Platform (D.R.O.P.) is manufactured using a combination of selective laser sintering (SLS) and shape deposition manufacturing (SDM) techniques. The enabling feature of D.R.O.P. is the use of microspines in a rotary configuration, which increases climbing and walking speed over previous microspine-based robots by more than 5x.
  • Worms, Waves and Robots Authors: Boxerbaum, Alexander; Horchler, Andrew; Shaw, Kendrick; Chiel, Hillel; Quinn, Roger D.
    The Biologically Inspired Robotics group at Case Western Reserve University has developed several innovative designs for a new kind of robot that uses peristalsis, the method of locomotion used by earthworms. Unlike previous wormlike robots, our concept uses a continuously deformable outer mesh that interpolates the body position between discrete actuators. Here, we summarize our progress with this soft hyper-redundant robot.
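Peristaltic locomotion of this kind is typically commanded as a traveling contraction wave, with each actuator (or mesh segment) driven by a phase-shifted copy of the same signal. The sketch below is only an illustration of that idea, with an assumed sinusoidal drive and made-up actuator count, wavelength, and frequency; it is not the CWRU group's control code:

```python
import math

def peristaltic_wave(n_actuators, t, wavelength=4.0, freq=1.0):
    """Contraction commands in [0, 1] for each actuator at time t.
    Neighboring actuators are phase-shifted, so the contraction
    travels along the body like an earthworm's peristaltic wave."""
    return [0.5 * (1.0 + math.sin(2.0 * math.pi * (freq * t - i / wavelength)))
            for i in range(n_actuators)]
```

Because each command is a phase-shifted sinusoid, actuator i+1 reproduces actuator i's command 1/(freq * wavelength) seconds later, which is what makes the contraction appear to travel down the body.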
  • Capture, Recognition and Imitation of Anthropomorphic Motion Authors: Hak, Sovannara; Mansard, Nicolas; Ramos, Oscar E.; Saab, Layale; Stasse, Olivier
    We present our work on the generation, recognition, and editing of anthropomorphic motion using the stack-of-tasks framework. The framework is based on the task-function formalism classically used for motion generation. Because tasks are described in their task spaces, those spaces are also well suited to motion analysis and task recognition. The reference behaviors originate from human trajectories. Specific tasks are then integrated to retarget and edit the reference motion so that it respects the dynamic constraints, the limits of the robot, and the general appearance of the motion.
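The prioritized, task-space resolution that the stack of tasks builds on can be illustrated with two tasks resolved through Jacobian pseudoinverses and null-space projection. This is a toy sketch with made-up Jacobians, not the actual stack-of-tasks implementation:

```python
import numpy as np

def stack_of_tasks_step(J1, e1, J2, e2):
    """One prioritized velocity resolution: task 1 strictly dominates task 2.
    Task 2 is only satisfied within the null space of task 1."""
    J1_pinv = np.linalg.pinv(J1)
    dq1 = J1_pinv @ e1                       # joint velocity for the primary task
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1  # null-space projector of task 1
    dq2 = np.linalg.pinv(J2 @ N1) @ (e2 - J2 @ dq1)
    return dq1 + N1 @ dq2

# Toy 3-DOF example: task 1 constrains joint 0, task 2 constrains joint 1.
J1, e1 = np.array([[1.0, 0.0, 0.0]]), np.array([1.0])
J2, e2 = np.array([[0.0, 1.0, 0.0]]), np.array([2.0])
dq = stack_of_tasks_step(J1, e1, J2, e2)
```

The projector N1 guarantees that the secondary correction never disturbs the primary task, which is the key property of the prioritization.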
  • Automated Biomanipulation of Single Cells Authors: Steager, Edward; Sakar, Mahmut Selman; Magee, Ceridwen; Kennedy III, Monroe; Cowley, Anthony; Kumar, Vijay
    Transport of individual cells or chemical payloads on a subcellular scale is an enabling tool for the study of cellular communication, cell migration, and other localized phenomena. We present a magnetically actuated robotic system for the fully automated manipulation of cells and microbeads. Our strategy uses autofluorescent robotic transporters and fluorescently labeled microbeads to aid tracking and control in optically obstructed environments. We demonstrate automated delivery of microbeads infused with chemicals to specified positions on neurons.
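Fully automated manipulation of this kind closes a loop between the tracked transporter position and the goal position. The following is a toy proportional control loop, with an assumed gain and first-order kinematics rather than the authors' actual magnetic actuation model:

```python
import numpy as np

def drive_to_target(pos, target, gain=0.5, tol=1e-3, max_steps=100):
    """Step a tracked transporter toward a target until within tolerance.
    The actuation command is proportional to the tracking error."""
    pos = np.asarray(pos, dtype=float)
    target = np.asarray(target, dtype=float)
    path = [pos.copy()]
    for _ in range(max_steps):
        error = target - pos
        if np.linalg.norm(error) < tol:
            break
        pos = pos + gain * error  # toy kinematics: position follows the command
        path.append(pos.copy())
    return path
```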
  • Correct High-Level Robot Control from Structured English Authors: Jing, Gangyuan; Finucane, Cameron; Raman, Vasumathi; Kress-Gazit, Hadas
    The Linear Temporal Logic MissiOn Planning (LTLMoP) toolkit is a software package designed to generate a controller that guarantees a robot satisfies a task specification written by the user in structured English. The controller can be implemented on either a simulated or physical robot. This video illustrates the use of LTLMoP to generate a correct-by-construction robot controller. Here, an Aldebaran Nao humanoid robot carries out tasks as a worker in a simplified grocery store scenario.
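The output of such synthesis is a discrete automaton whose transitions react to sensor propositions, and executing it is straightforward. The hand-written toy below only illustrates that execution model with hypothetical grocery-store propositions; it is not LTLMoP output or its API:

```python
# (state, customer_present) -> (next_state, action)
CONTROLLER = {
    ("stock",  False): ("stock",  "restock_shelf"),
    ("stock",  True):  ("assist", "go_to_customer"),
    ("assist", True):  ("assist", "help_customer"),
    ("assist", False): ("stock",  "return_to_shelf"),
}

def run(sensor_trace, state="stock"):
    """Execute the automaton over a trace of boolean sensor readings."""
    actions = []
    for customer_present in sensor_trace:
        state, action = CONTROLLER[(state, customer_present)]
        actions.append(action)
    return actions
```

A correct-by-construction toolchain guarantees that every execution of its synthesized automaton satisfies the original temporal-logic specification; here the transition table is simply written by hand for illustration.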
  • Learning to Place Objects: Organizing a Room Authors: Basu, Gaurab; Jiang, Yun; Saxena, Ashutosh
    In this video, we consider the task of a personal robot organizing a room by placing objects stably as well as in semantically preferred locations. While this involves many sub-tasks, such as grasping the objects, moving to a placing position, localizing itself, and placing each object in a proper location and orientation, it is the last problem, how and where to place objects, that is our focus in this work, and it has not yet been widely studied. We formulate the placing task as a learning problem. By computing appearance and shape features from the input (point clouds) that capture stability and semantics, our algorithm can identify good placements for multiple objects. In this video, we combine the placing algorithm with the other sub-tasks to enable a robot to organize a room in several scenarios, such as loading a bookshelf, a fridge, a waste bin, and a blackboard with various objects.
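Choosing where to place an object can be framed as scoring candidate placements by their features and picking the best one. The sketch below uses a hand-set linear scorer and made-up features; the paper's actual features and learned weights are not reproduced here:

```python
import numpy as np

def best_placement(candidates, w):
    """candidates: list of (location_name, feature_vector); w: weight vector.
    Features might encode stability (e.g. support area, flatness) and
    semantic preference (e.g. 'book belongs on shelf'). Returns the
    location whose linear score w . f is highest."""
    return max(candidates, key=lambda c: float(np.dot(w, c[1])))[0]

# Toy features: [flatness, support_area, semantic_match]
w = np.array([1.0, 1.0, 2.0])
candidates = [
    ("shelf", np.array([0.9, 0.8, 1.0])),  # stable and semantically right
    ("floor", np.array([1.0, 1.0, 0.0])),  # stable but semantically wrong
]
```

In the paper's setting the weights would be learned from labeled placements rather than set by hand.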
  • Demonstrations of Gravity-Independent Mobility and Drilling on Natural Rock Using Microspines Authors: Parness, Aaron; Frost, Matthew; King, Jonathan; Thatte, Nitish
    The video presents microspine-based anchors being developed for gripping rocks on the surfaces of comets and asteroids, or for use on cliff faces and lava tubes on Mars. Two types of anchor prototypes are shown supporting forces in all directions away from the rock: >160 N tangent, >150 N at 45 degrees, and >180 N normal to the surface of the rock. A compliant robotic ankle with two active degrees of freedom interfaces these anchors to the Lemur IIB robot for future climbing trials. Finally, a rotary percussive drill is shown coring into rock regardless of gravitational orientation. As a harder-than-zero-g proof of concept, inverted drilling created 20 mm diameter boreholes 83 mm deep in vesicular basalt samples while retaining 12 mm diameter rock cores in 3-6 pieces.
  • Creating and Using RoboEarth Object Models Authors: Di Marco, Daniel; Koch, Andreas; Zweigle, Oliver; Häussermann, Kai; Schießle, Björn; Levi, Paul; Galvez Lopez, Dorian; Riazuelo, Luis; Civera, Javier; Montiel, J.M.M; Tenorth, Moritz; Perzylo, Alexander Clifford; Waibel, Markus
    This work introduces a way to build and use an extensive, sensor-independent object model database. First, a cost-effective and computationally cheap way to create colored point-cloud models of common household objects using a Microsoft Kinect camera is presented. Those object models are stored in a world-wide accessible, distributed database called RoboEarth. Finally, the models are used to recognize the corresponding objects with any kind of camera. In the presented implementation, the demonstration was carried out with both a Kinect and common RGB cameras.
  • Dexterous Manipulation with Underactuated Fingers: Flip-And-Pinch Task Authors: Odhner, Lael; Ma, Raymond; Dollar, Aaron
    This video demonstrates the use of an underactuated robotic hand modified for the flip-and-pinch task to pick up thin objects from a table surface. Though well-suited for power-grasping, underactuated hands have difficulty with pinch-grasping and precision motions. We introduce a repeatable and robust method by which an underactuated hand flips thin objects off the table into a stable pinch grasp. We explain why this task is quasi-static and robust for a wide range of object dimensions.
  • Beyond Classical Teleoperation: Assistance, Cooperation, Data Reduction, and Spatial Audio Authors: Schauß, Thomas; Passenberg, Carolina; Stefanov, Nikolay; Feth, Daniela; Vittorias, Iason; Peer, Angelika; Hirche, Sandra; Buss, Martin; Rothbucher, Martin; Diepold, Klaus; Kammerl, Julius; Steinbach, Eckehard
    In this video, we present a teleoperation system capable of solving complex tasks in human-sized, wide-area environments. The system consists of two mobile teleoperators controlled by two operators and offers haptic, visual, and auditory feedback. The task examined here consists of repairing a robot by removing a computer and replacing a defective hard drive. To cope with the complexity of such a task, we go beyond classical teleoperation by integrating several advanced software algorithms into the system.
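A data-reduction technique commonly used for haptic streams is deadband coding: a sample is transmitted only when it differs from the last transmitted value by more than a threshold. The sketch below is a minimal absolute-deadband variant; the system in the video may use a perceptual, relative variant, and the threshold and names here are illustrative:

```python
def deadband_reduce(samples, deadband=0.1):
    """Return the subset of samples that would actually be transmitted.
    A sample is sent only if it deviates from the last sent value by
    more than the deadband threshold; the receiver holds the last value."""
    sent, last = [], None
    for s in samples:
        if last is None or abs(s - last) > deadband:
            sent.append(s)
            last = s
    return sent
```

On slowly varying force signals this can drop most samples while keeping the reconstruction error bounded by the deadband.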