Task Dependent Trajectory Learning from Multiple Demonstrations Using Movement Primitives

2019 
We propose a model for learning task-constrained robot movements from a finite number of observed human demonstrations. The model uses the variation between demonstrations to identify the important parts of a movement and reproduces trajectories accordingly: regions with low variability are reproduced in a constrained manner, while regions with higher variability are approximated more loosely to achieve shorter trajectories. The demonstrations are sampled into states, and an initial state sequence is chosen by a minimum-distance criterion. A state-variation analysis is then proposed that weights each state according to its similarity to all other states. A custom time function is constructed from this state-variability information and coupled with a state-driven dynamical system to reproduce the trajectories. We test the approach on typical two-dimensional task-constrained trajectories with constraints at the beginning, in the middle, and at the end of the movement, and further compare it with the case of using a standard exponentially decaying time function.
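A minimal sketch of the core idea follows, assuming a simple formulation not taken from the paper: per-state variability is estimated from the spread of nearest states across demonstrations, and a hypothetical variability-driven time (phase) function advances slowly where variability is low and faster where it is high. Function names, the weighting formula, and all parameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

def state_variability(demos, ref):
    """Illustrative per-state variability: for each state of a reference
    sequence, measure the spread of the nearest states across all other
    demonstrations. (Assumed formulation, not the paper's exact weighting.)"""
    variability = np.zeros(len(ref))
    for i, s in enumerate(ref):
        # distance from state s to the closest state in each demonstration
        dists = [np.min(np.linalg.norm(d - s, axis=1)) for d in demos]
        variability[i] = np.mean(dists) + np.std(dists)
    return variability

def phase_from_variability(variability, alpha=4.0):
    """Hypothetical variability-driven time function: low-variability
    states get large weights (slow, constrained progress), high-variability
    states get small weights (loose, faster progress); normalized to [0, 1]."""
    weights = 1.0 / (1.0 + alpha * variability)
    phase = np.cumsum(weights)
    return phase / phase[-1]

# Example: three noisy 2-D demonstrations of the same movement
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
base = np.stack([t, np.sin(np.pi * t)], axis=1)
demos = [base + rng.normal(scale=0.02 + 0.08 * t[:, None], size=base.shape)
         for _ in range(3)]
ref = demos[0]                        # stand-in for the minimum-distance reference
var = state_variability(demos[1:], ref)
phase = phase_from_variability(var)   # monotone phase driving a dynamical system
```

In this sketch the resulting phase would serve as the driving signal of a state-driven dynamical system in place of a standard exponentially decaying time function; the actual coupling used in the paper is not reproduced here.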