Learning the visual-oculomotor transformation

2015 
Active eye movements can be exploited to build a visuomotor representation of the surrounding environment. Maintaining and improving such a representation requires updating the internal model involved in the generation of eye movements. From this perspective, action and perception are tightly coupled and interdependent. In this work, we encode the internal model for oculomotor control with an adaptive filter inspired by the functionality of the cerebellum. Recurrent loops between a feedback controller and the internal model allow our system to perform accurate binocular saccades and to create an implicit representation of the nearby space. Simulation results show that this recurrent architecture outperforms classical feedback-error learning in both accuracy and sensitivity to system parameters. The proposed approach was validated by implementing the framework on an anthropomorphic robotic head.

Highlights
    • Saccadic movements are used to create a representation of the space.
    • Accurate saccades are generated by a model of the cerebellum.
    • A recurrent architecture is employed instead of classical feedback-error learning.
    • The proposed solution outperforms feedback-error learning.
    • The framework has been implemented successfully on a humanoid robot.
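The cerebellar-style adaptive filter described above can be illustrated with a minimal sketch: a fixed feedback gain drives the eye toward the target, while an LMS-style (decorrelation-type) learned weight adapts a feedforward correction from the post-saccadic error, so saccades become accurate despite an unknown plant gain. This is a simplified trial-by-trial toy model under assumed parameter values, not the paper's exact recurrent binocular architecture; all names and constants here are illustrative.

```python
# Minimal sketch (illustrative assumptions, not the authors' implementation):
# an adaptive filter learns a feedforward correction to the motor command so
# that saccades land on target despite an uncalibrated oculomotor plant.

def run_saccades(n_trials=200, plant_gain=0.7, k_fb=0.5, eta=0.05):
    """Trial-by-trial saccade adaptation with an LMS-style weight update."""
    w = 0.0                              # adaptive weight: the "internal model"
    errors = []
    for _ in range(n_trials):
        target = 1.0                     # normalized saccade amplitude
        u = k_fb * target + w * target   # feedback term + learned correction
        eye = plant_gain * u             # simplified linear oculomotor plant
        e = target - eye                 # post-saccadic (retinal) error
        w += eta * e * target            # LMS / decorrelation-style update
        errors.append(abs(e))
    return errors

errors = run_saccades()
print(f"first-trial error: {errors[0]:.3f}, final error: {errors[-1]:.4f}")
```

With these toy values, the weight converges toward the value that cancels the plant miscalibration, so the saccadic error shrinks across trials; the paper's recurrent scheme additionally feeds the filter from the controller's output rather than from the motor error alone.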