A Gesture Learning Interface for Simulated Robot Path Shaping With a Human Teacher

2014 
Recognition of human gestures is an active area of research that is integral to the development of intuitive human-machine interfaces for ubiquitous computing and assistive robotics. In particular, such systems are key to effective environmental designs that facilitate aging in place. Typically, gesture recognition takes the form of template matching, in which the human participant is expected to emulate a choreographed motion prescribed by the researchers; the corresponding robotic action is then a one-to-one mapping from the template classification to a library of distinct responses. In this paper, we explore a recognition scheme based on the growing neural gas (GNG) algorithm that places no initial constraints on how the user performs gestures. Motion descriptors extracted from sequential skeletal depth data are clustered by GNG and mapped directly to a robotic response that is refined through reinforcement learning, with a simple good/bad reward signal provided by the user. Results show that the topology-preserving quality of GNG allows generalization between gestured commands. Experimental results using an automated reward compare learning driven by single nodes with learning under the influence of node neighborhoods. Although the separability of the input data influences the speed of learning convergence for a given neighborhood radius, learning is shown to progress toward emulation of an associative memory that maps input gestures to desired actions.
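
As a concrete illustration of the pipeline described in the abstract, the minimal Python sketch below pairs a simplified GNG winner/neighbor update with a per-node action table refined by a +1/-1 reward. This is a hypothetical sketch, not the authors' implementation: the names (`adapt`, `respond_and_learn`), the learning rates, and the use of the second-nearest node as a stand-in for the winner's graph neighborhood are illustrative assumptions, and node insertion and edge aging from full GNG are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes, for illustration only.
N_NODES, DIM, N_ACTIONS = 20, 6, 4
EPS_WINNER, EPS_NEIGHBOR = 0.05, 0.006   # typical GNG adaptation rates
LR = 0.1                                 # action-value learning rate

nodes = rng.normal(size=(N_NODES, DIM))  # GNG node reference vectors
q = np.zeros((N_NODES, N_ACTIONS))       # per-node action preferences

def adapt(descriptor):
    """Move the winning node toward the input motion descriptor and
    return its index; the second-nearest node stands in for the
    winner's graph neighborhood (full GNG uses edge-connected nodes)."""
    dists = np.linalg.norm(nodes - descriptor, axis=1)
    s1, s2 = np.argsort(dists)[:2]
    nodes[s1] += EPS_WINNER * (descriptor - nodes[s1])
    nodes[s2] += EPS_NEIGHBOR * (descriptor - nodes[s2])
    return s1

def respond_and_learn(descriptor, reward_fn):
    """Select the action preferred by the winning node, then nudge
    that node's value for the chosen action by the good/bad reward."""
    node = adapt(descriptor)
    action = int(np.argmax(q[node]))
    r = reward_fn(action)     # +1 for "good", -1 for "bad"
    q[node, action] += LR * r
    return action, r

# Example: an automated reward that treats action 2 as correct for a
# given gesture; repeated trials drive the winning node's action table
# toward the desired gesture-to-action mapping.
gesture = rng.normal(size=DIM)
for _ in range(50):
    action, r = respond_and_learn(gesture, lambda a: 1 if a == 2 else -1)
```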