TechTalks from event: Technical session talks from ICRA 2012

The conference registration code needed to access these videos is available via this link: PaperPlaza. Step-by-step instructions for accessing the videos are here: step-by-step process.
Why are some videos missing? If you provided a consent form allowing your video to be published and it is still missing, please contact support@techtalks.tv

Bipedal Robot Control

  • Switching Control Design for Accommodating Large Step-Down Disturbances in Bipedal Robot Walking Authors: Park, Hae Won; Sreenath, Koushil; Ramezani, Alireza; Grizzle, J.W
    This paper presents a feedback controller that allows MABEL, a kneed, planar bipedal robot with 1 m-long legs, to accommodate an abrupt 20 cm decrease in ground height. The robot is given no information on where the step down occurs or by how much. After the robot has stepped off a raised platform, however, the height of the platform can be estimated from the lengths of the legs and the angles of the robot's joints. A real-time control strategy is implemented that uses this online estimate of step-down height to switch from a baseline controller, designed for flat-ground walking, to a second controller, designed to attenuate the torso oscillation resulting from the step-down disturbance. After one step, the baseline controller is re-applied. The control strategy is developed on a simplified model of the robot and then verified on a more realistic model before being evaluated experimentally. The paper concludes with experimental results showing MABEL (blindly) stepping off a 20 cm high platform.
  • Design and Experimental Implementation of a Compliant Hybrid Zero Dynamics Controller with Active Force Control for Running on MABEL Authors: Sreenath, Koushil; Park, Hae Won; Grizzle, J.W
    This paper presents a control design based on the method of virtual constraints and hybrid zero dynamics to achieve stable running on MABEL, a planar biped with compliance. In particular, a time-invariant feedback controller is designed such that the closed-loop system not only respects the natural compliance of the open-loop system, but also enables active force control within the compliant hybrid zero dynamics and results in exponentially stable running gaits. The compliant-hybrid-zero-dynamics-based controller with active force control is implemented experimentally and shown to realize stable running gaits on MABEL at an average speed of 1.95 m/s (4.4 mph) and a peak speed of 3.06 m/s (6.8 mph). The obtained gait has flight phases occupying up to 39% of the gait, and an estimated ground clearance of 7.5-10 cm.
  • Walking Control Strategy for Biped Robots Based on Central Pattern Generator Authors: Liu, Chengju; Chen, Qijun
    This paper deals with the walking control of biped robots, inspired by the biological concept of the central pattern generator (CPG). A control architecture is proposed with a trajectory generator and a motion engine. The trajectory generator consists of a CoG (center of gravity) trajectory generator and a foot trajectory modulator. The CoG generator produces adaptive CoG trajectories online, and the foot trajectories can be modulated based on the generated CoG trajectories. The biped platform NAO is used to validate the proposed locomotion control system. The experimental results confirm the effectiveness of the proposed control architecture.
  • On the Lyapunov Stability of Quasistatic Planar Biped Robots Authors: Varkonyi, Peter L.; Gontier, David; Burdick, Joel
    We investigate the local motion of a planar rigid body with unilateral constraints in the neighborhood of a two-contact frictional equilibrium configuration on a slope. A new sufficient condition for Lyapunov stability is developed in the presence of arbitrary external forces. Additionally, we construct an example that is stable against perturbations by infinitesimal forces, but does not possess Lyapunov stability against infinitesimal displacements or impulses. The large gap between previous stability criteria and ours leads to further questions about the nature of the exact stability condition.
  • Humanoid Robot Safe Fall Using Aldebaran NAO Authors: Yun, Seung-kook; Goswami, Ambarish
    Although the fall of a humanoid robot is rare in controlled environments, it cannot be avoided in the real world, where the robot may physically interact with the environment. Our earlier work introduced the strategy of direction-changing fall, in which the robot attempts to reduce the chance of human injury by changing its default fall direction in real time and falling in a safer direction. The current paper reports further theoretical developments culminating in a successful hardware implementation of this fall strategy conducted on the Aldebaran NAO robot [3]. This includes new algorithms for humanoid kinematics and Jacobians involving coupled joints, and a complete estimation of the body frame attitude using an additional inertial measurement unit. Simulations and experiments are handled seamlessly by our platform-independent humanoid control software, Locomote. We report experiment scenarios that demonstrate the effectiveness of the proposed strategies in changing the fall direction.
  • Control Design to Achieve Dynamic Walking on a Bipedal Robot with Compliance Authors: Lim, Bokman; Lee, Minhyung; Kim, Joohyung; Lee, Jusuk; Park, Jaeho; Seo, Keehong; Roh, Kyungshik
    We propose a control framework for dynamic bipedal locomotion with compliant joints. A novel 3D dynamic walk is achieved by exploiting the natural dynamics of the system. This is done by 1) driving the robot joints directly with a posture-based state machine and 2) controlling tendon-driven compliant actuators. To enlarge the gait's basin of attraction for stable walking, we also adaptively plan step-to-step motion and compensate the stance/swing motion. The final joint input is a superposition of state-machine control torques and the compensation torques of the balancers. Various walking styles are easily generated by composing straight and turning gait primitives, and such walking adapts effectively to various environments. The proposed method is applied to a torque-controlled robot platform, Roboray. Experimental results show that the gaits can traverse inclined and rough terrains with bounded variations, and that the resulting gaits are more human-like than those of conventional bent-knee walkers.
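The switching strategy in the first talk above (estimate the step-down height once the robot leaves the platform, run a disturbance-attenuating controller for one step, then return to the baseline) can be sketched in a few lines. This is a minimal illustration, not MABEL's actual implementation: the planar two-link leg geometry, the controller names, and the detection threshold are all assumptions for the sake of the example.

```python
# Hedged sketch of the step-down switching logic. Leg lengths, controller
# names, and the threshold are illustrative, not the authors' values.
import math

def stance_leg_height(q_hip, q_knee, l_thigh=0.5, l_shank=0.5):
    """Hip height above the stance foot for a planar two-link leg
    (joint angles in radians, measured from vertical)."""
    return l_thigh * math.cos(q_hip) + l_shank * math.cos(q_hip + q_knee)

def select_controller(step_down_estimate, threshold=0.05):
    """Switch to the torso-oscillation-attenuating controller for one
    recovery step when the estimated step-down exceeds the threshold."""
    if step_down_estimate > threshold:
        return "attenuate_torso_oscillation"
    return "baseline_flat_ground"
```

In the paper the estimate is formed online from leg lengths and joint angles after the swing foot touches down; here `stance_leg_height` stands in for that kinematic computation.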

Navigation and Visual Sensing

  • Navigation among Visually Connected Sets of Partially Distinguishable Landmarks Authors: Erickson, Lawrence H; LaValle, Steven M
    A robot navigates in a polygonal region populated by a set of partially distinguishable landmarks. The robot's motion primitives consist of actions of the form ``drive toward a landmark of class x''. To effectively navigate, the robot must always be able to see a landmark. Also, if the robot sees two landmarks of the same class, its motion primitives become ambiguous. Finally, if the robot wishes to navigate from landmark s_0 to landmark s_{goal} with a simple graph search algorithm, then there must be a sequence of landmarks [s_0, s_1, s_2, ..., s_k = s_{goal}] in which landmark s_i is visible from s_{i-1}. Given these three conditions, how many landmark classes are required for navigation in a given polygon P? We call this minimum number of landmark classes the connected landmark class number, denoted chi_{CL}(P). We study this problem for monotone polygons, an important family of polygons that are frequently generated as intermediate steps in other decomposition algorithms. We demonstrate that for all odd k, there exists a monotone polygon M_k with (3/4)(k^2 + 2k + 1) vertices such that chi_{CL}(M_k) >= k. We also demonstrate that for any n-vertex monotone polygon P, chi_{CL}(P) <= n/3 + 12.
  • Natural Landmark Extraction in Cluttered Forested Environments Authors: Song, Meng; Sun, Fengchi; Iagnemma, Karl
    In this paper, a new systematic method for extracting tree trunk landmarks from 3D point clouds of cluttered forested environments is proposed. This purely geometric method is built on scene understanding and automatic analysis of trees. The pipeline of our method consists of three steps. First, the raw point clouds are segmented by exploiting the circular cross-section of trees, and the segments are grouped into tree sections based on the principle of spatial proximity. Second, circles and axes are extracted from tree sections that suffer from loss of shape information. Third, by clustering and integrating the tree sections resulting from various spatial inconsistencies, straight tree trunk landmarks are finally formed for later localization. Experimental results from real forested environments are presented.
  • Rapid Vanishing Point Estimation for General Road Detection Authors: Miksik, Ondrej
    This paper deals with fast vanishing point estimation for autonomous robot navigation. Previous approaches showed suitable results, and vanishing point estimation has been used in many robotics tasks, especially the detection of ill-structured roads. The main drawback of such approaches is their computational complexity. Hardware acceleration is mentioned in many papers; however, we believe the biggest benefit of a vanishing point estimation algorithm is for primarily tele-operated robots (e.g., in the case of signal loss) that cannot use specialized hardware just for this feature. In this paper, we investigate an efficient implementation based on expanding Gabor wavelets into a linear combination of Haar-like box functions, performing fast filtering via the integral image trick, and we discuss the use of superpixels in the voting scheme to provide a significant speed-up (more than 40 times) while losing only 3-5% in precision.
  • A New Tentacles-Based Technique for Avoiding Obstacles During Visual Navigation Authors: Cherubini, Andrea; Spindler, Fabien; Chaumette, Francois
    In this paper, we design and validate a new tentacle-based approach for avoiding obstacles during appearance-based navigation with a wheeled mobile robot. In the past, we have developed a framework for safe visual navigation. The robot follows a path represented as a set of key images, and during obstacle circumnavigation the on-board camera is actuated to maintain scene visibility. In those works, the model used for obstacle avoidance was obtained using a potential vector field. Here, a more sophisticated and efficient method that exploits the robot kinematic model and predicts collisions at look-ahead distances is designed and integrated into that framework. Outdoor experiments comparing the two models show that the new approach presents many advantages. Higher speeds and precision can be attained, very cluttered scenarios involving large obstacles can be successfully dealt with, and the control inputs are smoother.
  • Maintaining visibility constraints during tele-echography with ultrasound visual servoing Authors: LI, Tao; Kermorgant, Olivier; Krupa, Alexandre
    This paper presents a multi-task control method to maintain the visibility of an anatomic element of interest while the doctor tele-operates a 2D ultrasound probe held by a medical robot. The primary task consists of automatically maintaining several visual constraints that guarantee an intersection between the ultrasound image plane and the anatomic object of interest, while the secondary task allows the medical expert to apply probe motion manually through tele-operation. Unlike classical visual servoing techniques, which continually regulate the current visual features to desired values, our control approach gradually activates the regulation of one or several ultrasound visual features that approach fixed limits, so as to keep them in a safe domain. The main advantage of this approach is to give the clinician control of all the degrees of freedom of the probe to examine the patient, while automatically preserving the visibility of the element of interest when required. Both simulations and experiments performed on an abdominal phantom demonstrate the efficiency of the visibility assistance task.
  • Simple and Robust Visual Servo Control of Robot Arms Using an On-Line Trajectory Generator Authors: Kroeger, Torsten; Padial, Jose
    Common visual servoing methods use image features to define an error signal in the feedback loops of robot motion controllers. This paper suggests a new visual servo control scheme that uses an on-line trajectory generator as an intermediate layer between image processing algorithms and robot motion controllers. The motion generation algorithm is capable of computing an entire trajectory from an arbitrary initial state of motion within one servo control cycle (typically one millisecond or less). This algorithm is fed with desired pose and velocity signals generated by an image processing algorithm. The advantages of this new architecture are: (a) jerk-limited and continuous motions are guaranteed independently of image processing signals, (b) kinematic motion constraints as well as physical and/or artificial workspace limits can be directly considered, and (c) the system can instantaneously and safely react to sensor failures (e.g., if cameras are covered or image processing fails). Real-world experimental results using a seven-joint robot arm are presented to underline the relevance for the field of robust sensor-guided robot motion control.
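The landmark-navigation talk above reduces planning to a graph search: landmarks are nodes, an edge (s_{i-1}, s_i) means s_i is visible from s_{i-1}, and the robot chains "drive toward a landmark of class x" actions along the path. A minimal sketch of that search, with a hypothetical visibility graph (the paper's polygons and landmark classes are not modeled here):

```python
# Minimal BFS over a landmark visibility graph, in the spirit of the
# "simple graph search" the abstract mentions. The graph data passed in
# is hypothetical; class-ambiguity checks are omitted for brevity.
from collections import deque

def landmark_path(visible_from, s0, goal):
    """Return a landmark sequence [s0, ..., goal] such that each landmark
    is visible from its predecessor, or None if the goal is unreachable."""
    parent = {s0: None}
    queue = deque([s0])
    while queue:
        s = queue.popleft()
        if s == goal:
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for nxt in visible_from.get(s, ()):
            if nxt not in parent:
                parent[nxt] = s
                queue.append(nxt)
    return None
```

In the paper's setting the search must additionally respect the class-ambiguity condition (never seeing two landmarks of the same class at once), which constrains how few classes suffice; the bound chi_{CL}(P) quantifies that.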
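The vanishing-point talk above gets its speed-up by replacing Gabor filtering with a few Haar-like box functions, each of which can be evaluated in constant time over a summed-area table. The integral image trick itself is standard and easy to sketch (the Gabor-to-box expansion from the paper is not reproduced here):

```python
# The integral image ("summed-area table") trick: after one O(h*w) pass,
# any axis-aligned box sum costs four lookups. This is the standard
# technique the abstract relies on, shown on plain nested lists.
def integral_image(img):
    """ii[y][x] = sum of img over rows [0, y) and columns [0, x)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) via four table lookups."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]
```

A Haar-like filter response is then just a signed combination of a few `box_sum` calls, which is what makes per-pixel texture-orientation voting fast enough without specialized hardware.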

Autonomy and Vision for UAVs

  • Cooperative Vision-Aided Inertial Navigation Using Overlapping Views Authors: Melnyk, Igor; Hesch, Joel; Roumeliotis, Stergios
    In this paper, we study the problem of Cooperative Localization (CL) for two robots, each equipped with an Inertial Measurement Unit (IMU) and a camera. We present an algorithm that enables the robots to exploit common features, observed over a sliding-window time horizon, in order to improve the localization accuracy of both robots. In contrast to existing CL methods, which require distance and/or bearing robot-to-robot observations, our algorithm infers the relative position and orientation (pose) of the robots using only the visual observations of common features in the scene. Moreover, we analyze the system observability properties to determine how many degrees of freedom (d.o.f.) of the relative transformation can be computed under different measurement scenarios. Lastly, we present simulation results to evaluate the performance of the proposed method.
  • UAV Vision: Feature Based Accurate Ground Target Localization through Propagated Initializations and Interframe Homographies Authors: Han, Kyuseo; Aeschliman, Chad; Park, Johnny; Kak, Avinash; Kwon, Hyukseong; Pack, Daniel
    Our work presents solutions to two related vexing problems in feature-based localization of ground targets in Unmanned Aerial Vehicle (UAV) images: (i) obtaining a good initial guess at the pose estimate to speed up convergence to the final pose estimate for each image frame in a video sequence; and (ii) time-bounded estimation of the position of the ground target. We address both problems within the framework of the ICP (Iterative Closest Point) algorithm, which now has a rich tradition of usage in computer vision and robotics applications. We solve the first problem by frame-to-frame propagation of the computed pose estimates, which provides the initializations needed by ICP. The second problem is solved by terminating the iterative estimation process when the time available for each image frame expires. We show that when frame-to-frame homography is factored into the iterative calculations, the accuracy of the position calculated at the time of bailing out of the iterations is nearly always sufficient for the goals of UAV vision.
  • First Results in Autonomous Landing and Obstacle Avoidance by a Full-Scale Helicopter Authors: Scherer, Sebastian; Chamberlain, Lyle; Singh, Sanjiv
    Currently deployed unmanned rotorcraft rely on carefully preplanned missions and operate from prepared sites, and thus avoid the need to perceive and react to the environment. Here we consider the problems of finding suitable but previously unmapped landing sites given general coordinates of the goal, and planning collision-free trajectories in real time to land at the “optimal” site. This requires accurate mapping, fast landing zone evaluation algorithms, and motion planning. We report here on the sensing, perception, and motion planning integrated onto a full-scale helicopter that flies completely autonomously. We show results from 8 landing site selection experiments and 5 obstacle-avoidance runs. These experiments have demonstrated the first autonomous full-scale helicopter that successfully selects its own landing sites and avoids obstacles.
  • Real-Time Onboard Visual-Inertial State Estimation and Self-Calibration of MAVs in Unknown Environments Authors: Weiss, Stephan; Achtelik, Markus W.; Lynen, Simon; Chli, Margarita; Siegwart, Roland
    The combination of visual and inertial sensors has proved very popular in MAV navigation due to the flexibility in weight, power consumption, and cost that it offers. At the same time, coping with the large latency between inertial and visual measurements and processing images in real time pose great research challenges. Most modern MAV navigation systems avoid tackling this explicitly by employing a ground station for off-board processing. We propose a navigation algorithm for MAVs equipped with a single camera and an IMU that is able to run onboard and in real time. The main focus is on the proposed speed-estimation module, which converts the camera into a metric body-speed sensor using IMU data within an EKF framework. We show how this module can be used for full self-calibration of the sensor suite in real time. The module is then used both during initialization and as a fall-back solution in case of tracking failures of a keyframe-based VSLAM module. The latter is based on an existing high-performance algorithm, extended such that it achieves scalable 6DoF pose estimation at constant complexity. Fast onboard speed control is ensured by relying solely on the optical flow of at least two features in two consecutive camera frames and the corresponding IMU readings. Our nonlinear observability analysis and our real experiments demonstrate that this approach can be used to control a MAV in speed, and we also show results of operation at 40 Hz on a 1.6 GHz onboard Atom computer.
  • Autonomous Landing of a VTOL UAV on a Moving Platform Using Image-Based Visual Servoing Authors: Lee, Daewon; Ryan, Tyler; Kim, H. Jin
    In this paper we describe a vision-based algorithm to control a vertical-takeoff-and-landing unmanned aerial vehicle while tracking and landing on a moving platform. Specifically, we use image-based visual servoing (IBVS) to track the platform in two-dimensional image space and generate a velocity reference command used as the input to an adaptive sliding mode controller. Compared with other vision-based control algorithms that reconstruct a full three-dimensional representation of the target, which requires precise depth estimation, IBVS is computationally cheaper: it is less sensitive to depth estimation errors, allowing a faster method to obtain this estimate. To enhance the velocity tracking of the sliding mode controller, an adaptive rule is described to account for the ground effect experienced during the maneuver. Finally, the IBVS algorithm integrated with the adaptive sliding mode controller for tracking and landing is validated in an experimental setup using a quadrotor.
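The core IBVS idea in the last talk above, regulating the error between tracked image features and their desired positions and using the result as a velocity reference, reduces in its simplest form to a proportional law on image-space coordinates. The sketch below is that textbook form only; the gain and feature values are illustrative, and the paper's adaptive sliding mode controller and interaction-matrix details are not reproduced.

```python
# Textbook proportional IBVS law, v = -gain * (s - s*), on image-space
# feature coordinates. The output serves as a velocity reference for a
# lower-level controller; gain and inputs here are illustrative.
def ibvs_velocity(features, desired, gain=0.8):
    """Velocity reference driving image features toward desired positions."""
    return [-gain * (s - sd) for s, sd in zip(features, desired)]
```

In the full scheme this 2D image-space command is mapped through the (depth-dependent) interaction matrix to a camera velocity, which is where IBVS's weak sensitivity to depth errors pays off.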