Reinforcement Learning for Assisted Visual-Inertial Robotic Calibration

2020 
We present a new approach to assisted intrinsic and extrinsic calibration: an observability-aware visual-inertial calibration system that guides the user through the calibration procedure by suggesting easy-to-perform motions that render the calibration parameters observable. This is done by identifying which subset of the parameter space is rendered observable with a rank-revealing decomposition of the Fisher information matrix, modeling calibration as a Markov decision process, and using reinforcement learning to establish which discrete sequence of motions optimizes for the regression of the desired parameters. The goal is to address an assumption common to most calibration solutions: that sufficiently informative motions are provided by the operator. We do not make use of a process model and instead leverage an experience-based approach that is broadly applicable to any platform. This is a step in the direction of long-term autonomy and “power-on-and-go” robotic systems, making repeatable and reliable calibration accessible to the non-expert operator.
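To illustrate the two components described in the abstract, the sketch below is a minimal, hypothetical Python example; all function and variable names are our own and do not come from the paper's implementation. It shows (i) how a rank-revealing eigendecomposition of an accumulated Fisher information matrix could flag which calibration parameters are currently observable, and (ii) a tabular Q-learning loop over a discrete set of candidate motions, treating calibration as a Markov decision process whose reward (provided by an assumed `env` interface) reflects the information gained about the still-unobservable parameters.

```python
import numpy as np


def observable_subspace(jacobians, noise_cov, tol=1e-6):
    """Identify the observable subset of the calibration parameters.

    jacobians: list of (m x p) measurement Jacobians w.r.t. the p
        calibration parameters, accumulated over the trajectory so far.
    noise_cov: (m x m) measurement noise covariance.
    Returns a boolean mask over the p parameters, based on a
    rank-revealing eigendecomposition of the Fisher information matrix.
    """
    p = jacobians[0].shape[1]
    fim = np.zeros((p, p))
    w_inv = np.linalg.inv(noise_cov)
    for J in jacobians:
        fim += J.T @ w_inv @ J                   # accumulate Fisher information
    eigvals, eigvecs = np.linalg.eigh(fim)       # rank-revealing decomposition
    excited = eigvecs[:, eigvals > tol]          # well-excited directions
    # Heuristic: a parameter counts as observable if it has significant
    # support in the span of the well-excited eigenvectors.
    support = np.linalg.norm(excited, axis=1)
    return support > 0.5


def q_learning_motion_policy(env, n_motions, n_states,
                             episodes=500, alpha=0.1, gamma=0.95, eps=0.2):
    """Tabular Q-learning over a discrete set of candidate motions.

    `env` is an assumed interface exposing reset() -> state and
    step(action) -> (next_state, reward, done), with reward tied to the
    information gained about the not-yet-observable parameters.
    """
    Q = np.zeros((n_states, n_motions))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Epsilon-greedy choice among the candidate motions.
            a = rng.integers(n_motions) if rng.random() < eps else int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            # Standard Q-learning temporal-difference update.
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return Q  # greedy policy: argmax over motions per state
```

The greedy policy extracted from `Q` would then suggest, for each calibration state, the next easy-to-perform motion expected to excite the remaining unobservable parameter directions.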