This paper introduces a novel framework for estimating the motion of a robotic car from image information (also known as visual odometry). Most current monocular visual odometry algorithms rely on a calibrated camera model and recover relative rotation and translation by tracking image features and applying geometric constraints. This approach has several drawbacks: translation is recovered only up to scale, camera calibration is required, and uncertainty estimates are not directly obtained. We propose an alternative approach that uses semi-parametric statistical models to recover scale, infer camera parameters, and provide uncertainty estimates given a training dataset. Unlike conventional non-parametric machine learning procedures, which would discard the standard parametric models of egomotion, our framework combines these existing parametric models with powerful non-parametric Bayesian learning procedures. We devise a multiple-output Gaussian Process procedure, named Coupled GP, that uses a parametric model as the mean function and a non-stationary covariance function to map image features directly onto vehicle motion. This procedure also infers joint uncertainty estimates for rotation and translation. Experiments on data collected from a single camera under challenging conditions show that this technique outperforms traditional methods over trajectories of several kilometers.
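The core idea of the semi-parametric approach, a Gaussian Process whose mean is a parametric model rather than zero, can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the non-stationary covariance and multi-output coupling of the Coupled GP are replaced by a plain squared-exponential kernel and a hypothetical linear mean model, and all function and parameter names are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel. The paper uses a non-stationary
    # covariance; a stationary RBF stands in here for simplicity.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def parametric_mean(X, w):
    # Hypothetical linear model standing in for the geometric
    # egomotion model that would serve as the GP mean function.
    return X @ w

def gp_predict(X_train, y_train, X_test, w, noise=1e-6):
    # GP posterior with a non-zero (parametric) mean function:
    #   mu(x*) = m(x*) + k*^T (K + sigma^2 I)^{-1} (y - m(X))
    # i.e. the GP corrects the parametric model's residuals.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    Kss = rbf_kernel(X_test, X_test)
    resid = y_train - parametric_mean(X_train, w)
    alpha = np.linalg.solve(K, resid)
    mu = parametric_mean(X_test, w) + Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)  # predictive uncertainty
    return mu, cov
```

The appeal of this construction is that where training data is scarce the prediction falls back on the parametric motion model, while near training data the non-parametric component corrects the model's systematic errors and the posterior covariance supplies the uncertainty estimates that purely geometric methods lack.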