Human visual, eye movement and hand movement mechanisms underlying performance in telerobotic and virtual environment interfaces

1998 
This thesis draws upon previous research in bioengineering, particularly biological control systems, to further elucidate human visual, eye movement, and hand movement mechanisms in the context of controlling remote robots. For telerobotic control, the importance and effectiveness of utilizing computer-based models is outlined and demonstrated. As in the case of direct manual control, human neurological control of the eyes and hands also relies strongly upon internal cognitive models. The use of models for effective control is a central theme throughout this thesis. Two primary performance limitations in controlling robots at a distance are the time delay in sensing and command transmission between the human and the robot, and limited perception of depth in the remote environment. Engineering models of the robot and the remote site have been embedded into computer control systems to help overcome these limitations. In addition, such model-based information has been shown to be an instrumental part of higher-level control strategies, in which the human plans tasks for the robot in the computer model, and a similar computer model is used to semi-autonomously control the robot at the remote site. This thesis demonstrates that a human operator can readily learn to use the computer model for control. Utilizing computer-based models to assist with telerobotic control requires that human operators effectively understand and interact with the model. Virtual reality systems, consisting of head-mounted displays and body-tracking devices, are often used as the interface between the human and the computer model. Human reaching movements were recorded and analyzed while subjects used such a virtual reality system to indicate a target goal for a telerobot manipulator. In everyday life, the reaching movements of a human with normal visual-motor functioning rely upon an accurate internal model of the reach, so that movements (initiated primarily open loop, in a feedforward fashion) are made with grace and ease. By comparison, analysis of 3D reaching movements in the virtual environment revealed some of the difficulties and limitations in matching the internal visual-motor spatial model in the human brain with the 3D spatial model embedded in the computer. Finally, the eye movements with which we visually scan the perceived world have been shown to be controlled in a top-down fashion by an internal model representation of the world within the brain. For static images and scenes, this human scanpath has been experimentally shown to be fairly repetitive, yet idiosyncratic to the particular subject and the particular image. By incorporating smooth pursuit eye movements as extended fixations over features moving across the retina, this thesis extends the scanpath theory to include dynamic stimuli. (Abstract shortened by UMI.)
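
The model-based handling of communication delay summarized above can be sketched in simplified form. The following Python example is not taken from the thesis; the one-degree-of-freedom ArmModel class, the five-tick delay, and all other names and parameters are illustrative assumptions. It shows the operator's commands driving a local computer model immediately (a predictive display), while the same commands reach the remote robot only after a transmission delay.

    """
    Minimal sketch (not from the thesis) of model-based telerobotic control
    under communication delay: the operator's commands drive a local kinematic
    model immediately (a predictive display), while the real robot at the
    remote site receives those commands only after a transmission delay.
    All names and parameters here are illustrative assumptions.
    """
    from collections import deque

    DELAY_STEPS = 5   # assumed one-way command delay, in control ticks
    DT = 0.1          # assumed control period in seconds

    class ArmModel:
        """1-DOF kinematic stand-in for the robot and for its computer model."""
        def __init__(self):
            self.position = 0.0

        def apply(self, velocity_cmd):
            self.position += velocity_cmd * DT
            return self.position

    def run(commands):
        local_model = ArmModel()    # model the operator interacts with
        remote_robot = ArmModel()   # actual robot, reached only after delay
        pipeline = deque([0.0] * DELAY_STEPS)  # commands in transit

        for cmd in commands:
            predicted = local_model.apply(cmd)  # immediate, model-based feedback
            pipeline.append(cmd)
            delayed_cmd = pipeline.popleft()    # command arriving at remote site
            actual = remote_robot.apply(delayed_cmd)
            print(f"predicted={predicted:+.2f}  actual={actual:+.2f}")

    if __name__ == "__main__":
        # Operator drives forward, then stops: the predicted pose leads the
        # delayed robot by DELAY_STEPS ticks, which the local model hides.
        run([1.0] * 10 + [0.0] * 10)

In this toy loop, the operator always sees a response consistent with the local model, which is the essential benefit of a predictive display under delay; the remote model-based controller that executes the planned task semi-autonomously is omitted here.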
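Similarly, the idea of folding smooth pursuit into the scanpath as an extended fixation on a moving feature can be illustrated with a minimal classifier. This sketch is a simplification under stated assumptions, not the thesis's algorithm: the velocity threshold, the spatial window, the one-dimensional geometry, and the 100 Hz sampling rate are all invented for illustration.

    """
    Illustrative sketch (assumptions, not the thesis's method) of treating
    smooth pursuit as an "extended fixation" on a moving feature: gaze samples
    whose velocity is below a saccade threshold and that stay close to a
    moving feature are labeled as fixations on that feature.
    """
    SACCADE_VEL = 30.0   # assumed deg/s threshold separating saccades
    NEAR_FEATURE = 2.0   # assumed deg window for "on the feature"
    DT = 0.01            # assumed 100 Hz sampling

    def label_samples(gaze, feature):
        """Label each gaze sample as 'saccade', 'fixation', or 'other'.

        gaze, feature: equal-length lists of 1-D positions in degrees.
        Pursuit of the moving feature is folded into 'fixation', implementing
        the idea of an extended fixation over a feature moving on the retina.
        """
        labels = []
        for i in range(len(gaze)):
            vel = 0.0 if i == 0 else abs(gaze[i] - gaze[i - 1]) / DT
            if vel > SACCADE_VEL:
                labels.append("saccade")
            elif abs(gaze[i] - feature[i]) < NEAR_FEATURE:
                labels.append("fixation")  # includes smooth pursuit of the feature
            else:
                labels.append("other")
        return labels

    if __name__ == "__main__":
        # Feature drifts rightward at 10 deg/s; gaze tracks it with a small offset,
        # so the whole pursuit episode is labeled as one run of fixation samples.
        feature = [10.0 + 10.0 * DT * i for i in range(50)]
        gaze = [f + 0.5 for f in feature]
        print(label_samples(gaze, feature)[:10])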