Augmented virtuality for model-based teleoperation

2017 
Ground-based teleoperation of robots in space is subject to time delays of several seconds or more. This motivates model-based approaches, where the operator interacts with a model (simulation) of the remote environment and the remote robot attempts to reproduce the results of that interaction. However, it is also desirable for the operator to view (delayed) images from the remote scene. These images are often from one or more monocular cameras mounted on the robot end-effector, which introduces several other problems: unintuitive teleoperation due to the eye-in-hand configuration, limited field of view, and lack of stereo visualization. We present an augmented virtuality interface for teleoperation that addresses these problems by projecting the real camera images onto a registered 3D model of the environment and allowing the operator to select any desired viewpoint. This approach is suitable when at least a partial model of the environment is available, as is the case for satellite servicing. The proposed method begins with a video survey to register the 3D model to the physical environment, followed by a user interface that presents a stereo visualization of the model, augmented by projections of real camera images onto the model. We quantitatively and qualitatively compare the augmented virtuality images to real camera images taken from the same viewpoint and perform experiments to evaluate the efficacy of the augmented virtuality paradigm for teleoperation. The results suggest that this approach can improve operator situation awareness, potentially leading to better performance, especially when the camera views are unintuitive or limited.
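To illustrate the projection step described above, the sketch below shows one common way to compute per-vertex texture coordinates by projecting registered model vertices through a pinhole camera model. The NumPy-based interface, the function name, and the absence of occlusion handling (which would require a depth test in practice) are assumptions made for illustration; this is not the paper's implementation.

```python
import numpy as np

def project_vertices(vertices, K, R, t, image_size):
    """Project 3D model vertices into a real camera image to obtain
    per-vertex texture coordinates (projective texture mapping).

    vertices   : (N, 3) model points in the world frame
    K          : (3, 3) camera intrinsic matrix
    R, t       : camera extrinsics (world -> camera), shapes (3, 3) and (3,)
    image_size : (width, height) of the camera image in pixels
    """
    # Transform model points into the camera frame.
    cam_pts = vertices @ R.T + t                  # (N, 3)
    in_front = cam_pts[:, 2] > 0                  # keep points in front of the camera

    # Pinhole projection into pixel coordinates.
    proj = cam_pts @ K.T                          # (N, 3)
    px = proj[:, :2] / proj[:, 2:3]

    # Normalize to [0, 1] texture (u, v) coordinates and flag points that
    # fall outside the camera's field of view.
    w, h = image_size
    uv = px / np.array([w, h])
    inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
    valid = in_front & inside
    return uv, valid
```

In a full renderer, the resulting (u, v) coordinates would be used to sample the delayed camera image as a texture on the registered model, so the operator can view the textured scene from any virtual viewpoint.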