Expectation-driven autonomous learning and interaction system

2008 
We introduce ALIS 2, the latest instance of our autonomous learning and interaction system. It comprises sensing modalities for visual (depth blobs, planar surfaces, motion) and auditory (speech, localization) signals, together with self-collision-free behavior generation on the robot ASIMO. The system design emphasizes the split into a completely autonomous reactive layer and an expectation-generation layer. Different feature channels can be classified and named with arbitrary speech labels in on-line learning sessions. The feasibility of the proposed approach is demonstrated in interaction experiments.
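The on-line association of feature channels with speech labels can be illustrated with a minimal sketch. This is a hypothetical nearest-centroid learner, not the ALIS 2 implementation: each incoming feature vector updates a running per-label centroid, and classification returns the label of the closest centroid.

```python
class OnlineLabelLearner:
    """Hypothetical sketch of on-line label learning:
    associate feature vectors with speech labels via running centroids."""

    def __init__(self):
        # label -> (sample count, running mean vector)
        self.centroids = {}

    def learn(self, features, label):
        """Incrementally update the centroid for this speech label."""
        n, mean = self.centroids.get(label, (0, [0.0] * len(features)))
        n += 1
        mean = [m + (f - m) / n for m, f in zip(mean, features)]
        self.centroids[label] = (n, mean)

    def classify(self, features):
        """Return the label whose centroid is nearest (squared Euclidean)."""
        if not self.centroids:
            return None

        def dist(label):
            _, mean = self.centroids[label]
            return sum((f - m) ** 2 for f, m in zip(features, mean))

        return min(self.centroids, key=dist)
```

For example, after a few labeled observations the learner generalizes to nearby feature vectors:

```python
learner = OnlineLabelLearner()
learner.learn([1.0, 0.0], "cup")
learner.learn([0.9, 0.1], "cup")
learner.learn([0.0, 1.0], "ball")
learner.classify([0.95, 0.05])  # -> "cup"
```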