Learning from Demonstration Based on a Classification of Task Parameters and Trajectory Optimization

2019 
Learning from demonstration involves extracting important information from demonstrations and reproducing robot action sequences or trajectories with generalization capabilities. Task parameters represent dependencies observed in the demonstrations that constrain and define a robot action, which is necessary because the state space of the environment is effectively infinite. We present a methodology for learning from demonstration based on a classification of task parameters. The classified task parameters are used to construct a cost function that describes the demonstration data. For reproduction, we propose a novel trajectory optimization that generates a simplified version of the trajectory for different configurations of the task parameters. As the last step before reproduction on a real robotic arm, we approximate this trajectory with a dynamic movement primitive (DMP)-based system to obtain a smooth trajectory. Results obtained for trajectories with three degrees of freedom (two translations and one rotation) show that the system can encode multiple task parameters from a small number of demonstrations and generate collision-free trajectories.
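The abstract does not give the exact DMP formulation or parameter values used for the final smoothing step, so the following is only a minimal sketch of the standard discrete DMP (one transformation system per degree of freedom with a phase-driven forcing term) that could approximate the optimized trajectory; the function name, basis count, and gain values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_and_rollout_dmp(demo, dt=0.01, n_basis=20,
                        alpha_z=25.0, beta_z=25.0 / 4.0, alpha_x=1.0):
    """Fit a 1-D discrete DMP to a trajectory sample and roll it out.

    demo : (T,) array of positions for a single degree of freedom
           (e.g. one translation or the rotation of the optimized trajectory).
    Returns the reproduced, smoothed trajectory as a (T,) array.
    """
    T = len(demo)
    tau = (T - 1) * dt                       # movement duration
    y0, g = demo[0], demo[-1]                # start and goal taken from the input

    # Numerical derivatives of the input trajectory.
    yd = np.gradient(demo, dt)
    ydd = np.gradient(yd, dt)

    # Canonical system: phase x decays from 1 towards 0 over the movement.
    t = np.arange(T) * dt
    x = np.exp(-alpha_x * t / tau)

    # Target forcing term, obtained by inverting the transformation system
    # tau^2 * ydd = alpha_z * (beta_z * (g - y) - tau * yd) + f.
    f_target = tau**2 * ydd - alpha_z * (beta_z * (g - demo) - tau * yd)

    # Gaussian basis functions spaced in phase.
    c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))      # centres
    h = n_basis / c**2                                      # widths
    psi = np.exp(-h * (x[:, None] - c[None, :])**2)         # (T, n_basis)

    # Locally weighted regression for the basis weights.
    s = x * (g - y0)
    w = np.array([
        np.sum(s * psi[:, i] * f_target) / (np.sum(s**2 * psi[:, i]) + 1e-10)
        for i in range(n_basis)
    ])

    # Roll out the DMP with Euler integration to get the smooth reproduction.
    y, v, xr, traj = y0, 0.0, 1.0, []
    for _ in range(T):
        psi_r = np.exp(-h * (xr - c)**2)
        f = (psi_r @ w) / (np.sum(psi_r) + 1e-10) * xr * (g - y0)
        a = (alpha_z * (beta_z * (g - y) - tau * v) + f) / tau**2
        v += a * dt
        y += v * dt
        xr += (-alpha_x * xr / tau) * dt
        traj.append(y)
    return np.array(traj)
```

In this reading, each of the three degrees of freedom (two translations, one rotation) would be fitted and rolled out independently; changing the goal `g` shifts the reproduction while the learned shape of the motion is preserved, which is the usual reason for placing a DMP after a trajectory optimizer.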