A Joint Learning Framework of Visual Sensory Representation, Eye Movements and Depth Representation for Developmental Robotic Agents

2017 
In this paper, we propose a novel visual learning framework for developmental robotic agents that mimics the developmental learning process of human infants. It enables an agent to autonomously perceive depth by simultaneously developing its visual sensory representation, eye movement control, and depth representation through the integration of multiple visual depth cues during self-induced lateral body movement. Based on the active efficient coding (AEC) theory, a sparse coding model and a reinforcement learner are tightly coupled through a unified cost function that jointly improves the sensory coding model and the eye motor control. The eye motor control signals generated for the different visual depth cues are then used together as inputs to a multi-layer neural network that learns to represent the given depth from simple human-robot interaction. We show that the proposed framework, implemented on a simulator of the HOAP-3 humanoid robot, effectively learns to develop visual sensory representation, eye motor control, and depth perception autonomously and simultaneously, with a self-calibrating ability.
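The AEC-style coupling described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's implementation: it pairs a greedy k-sparse coder with a softmax REINFORCE controller over a small set of discrete eye-movement commands, and all dimensions, learning rates, and the action set are arbitrary choices for the example. The key point it demonstrates is the shared cost: the coder's reconstruction error is also, negated, the reward that shapes the motor policy.

```python
import numpy as np

rng = np.random.default_rng(0)
PATCH_DIM, N_BASES, N_ACTIONS = 64, 128, 7   # assumed sizes

bases = rng.normal(size=(N_BASES, PATCH_DIM))
bases /= np.linalg.norm(bases, axis=1, keepdims=True)
prefs = np.zeros(N_ACTIONS)                  # softmax action preferences
baseline = 0.0                               # running reward baseline

def sparse_code(patch, k=10):
    """Encode a binocular patch with its k best-matching bases."""
    scores = bases @ patch
    active = np.argsort(-np.abs(scores))[:k]
    coeffs = np.zeros(N_BASES)
    coeffs[active] = scores[active]
    return coeffs, coeffs @ bases

def aec_step(patch, lr_coder=0.01, lr_policy=0.05):
    """One joint update of the coding model and the motor policy."""
    global bases, prefs, baseline
    coeffs, recon = sparse_code(patch)
    residual = patch - recon
    cost = float(residual @ residual)        # the shared (unified) cost

    # Sensory update: descend the dictionary's reconstruction error.
    bases = bases + lr_coder * np.outer(coeffs, residual)
    bases /= np.linalg.norm(bases, axis=1, keepdims=True)

    # Motor update: the negated cost is the reinforcement signal, so eye
    # movements that make the input easier to encode are favored.
    probs = np.exp(prefs - prefs.max())
    probs /= probs.sum()
    action = int(rng.choice(N_ACTIONS, p=probs))
    reward = -cost
    grad = -probs
    grad[action] += 1.0                      # d log pi(action) / d prefs
    prefs = prefs + lr_policy * (reward - baseline) * grad
    baseline += 0.05 * (reward - baseline)
    return action, cost
```

The depth-representation stage can be sketched in the same spirit: a small multi-layer network maps the motor commands produced for several depth cues (three assumed here) to a depth value, with training targets supplied by the human-robot interaction mentioned above. Again, the layer sizes and training details are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N_CUES, HIDDEN = 3, 16                       # assumed sizes
W1 = rng.normal(scale=0.5, size=(N_CUES, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.5, size=(HIDDEN, 1));      b2 = np.zeros(1)

def depth_step(motor_signals, true_depth, lr=0.01):
    """One SGD step on the squared depth-prediction error."""
    global W1, b1, W2, b2
    h = np.tanh(motor_signals @ W1 + b1)     # hidden layer
    err = (h @ W2 + b2) - true_depth         # prediction error, shape (1,)
    dh = (err @ W2.T) * (1.0 - h**2)         # backprop through tanh
    W2 = W2 - lr * np.outer(h, err);            b2 = b2 - lr * err
    W1 = W1 - lr * np.outer(motor_signals, dh); b1 = b1 - lr * dh
    return (err**2).item()
```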