Learning nonlinear appearance manifolds for robot localization

2005 
We propose a nonlinear method for learning the low-dimensional pose of a robot from high-dimensional panoramic images. The panoramic images are assumed to lie on a nonlinear, low-dimensional appearance manifold embedded in a high-dimensional image space. We demonstrate that the local geometry of a point and its nearest neighbors on this manifold can be used to project the point onto a low-dimensional coordinate space. Using this embedding, the unknown camera position can be estimated from a novel panoramic image. We show how the image-based position measurements can be integrated with odometry information in a Bayesian framework to yield an online estimate of the robot's position. Results on simulated data show that the proposed method outperforms appearance-based models built on principal component analysis and kernel density estimation.
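
The abstract's description of projecting a novel image via the local geometry of its nearest neighbors matches the standard out-of-sample extension of locally linear embedding (LLE). Below is a minimal sketch of that idea, assuming an LLE-style local reconstruction; the function name `lle_project` and the parameters `k` and `reg` are illustrative, not taken from the paper.

```python
import numpy as np

def lle_project(x_new, X_train, Y_train, k=8, reg=1e-3):
    """Project a novel high-dimensional image onto a learned
    low-dimensional embedding via locally linear reconstruction.

    x_new   : (D,)   novel panoramic image (flattened)
    X_train : (N, D) training images
    Y_train : (N, d) their learned low-dimensional coordinates
    """
    # Find the k nearest training images in image space.
    dists = np.linalg.norm(X_train - x_new, axis=1)
    idx = np.argsort(dists)[:k]
    Z = X_train[idx] - x_new             # neighbors centered at x_new

    # Solve for reconstruction weights: minimize ||x_new - sum_i w_i x_i||^2
    # subject to sum(w) = 1, with Tikhonov regularization for stability.
    G = Z @ Z.T                          # (k, k) local Gram matrix
    G += reg * np.trace(G) * np.eye(k)
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()

    # The same weights reconstruct the point in the embedding space,
    # yielding a low-dimensional position estimate for the novel image.
    return w @ Y_train[idx]
```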
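
The abstract does not specify which Bayesian filter fuses the image-based measurements with odometry; a particle filter is one common instantiation of such online fusion. The following sketch is a hypothetical example under that assumption, with `particle_filter_step` and its noise parameters invented for illustration; `z_img` would be the manifold projection computed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, odom_delta, z_img,
                         motion_sigma=0.05, meas_sigma=0.2):
    """One predict/update cycle fusing odometry with an image-based
    position measurement z_img.

    particles : (M, 2) hypothesized (x, y) robot positions
    weights   : (M,)   normalized particle weights
    """
    # Predict: propagate particles by the odometry increment plus noise.
    particles = particles + odom_delta + rng.normal(0, motion_sigma,
                                                    particles.shape)

    # Update: reweight by the Gaussian likelihood of the image-based
    # position measurement.
    err = np.linalg.norm(particles - z_img, axis=1)
    weights = weights * np.exp(-0.5 * (err / meas_sigma) ** 2)
    weights /= weights.sum()

    # Resample when the effective sample size degenerates.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))

    return particles, weights
```

Called once per time step, this yields an online posterior over the robot's position whose mean `(weights @ particles)` serves as the point estimate.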