Robotic Manipulation Skill Acquisition via Demonstration Policy learning
2021
Current robots perform repetitive tasks well but struggle to adapt to variations in environment and task. Teaching a robot by demonstration is a powerful way to address this problem; however, learning methods that consume large amounts of raw sensory and joint-state information find it extremely difficult to learn a demonstration policy efficiently. This paper proposes a learning-by-imitation approach that learns a demonstration policy for robotic manipulation skill acquisition from what-where-how interaction data, improving the robot's adaptability to environments and tasks with fewer training inputs. Demonstrations are given through interaction with RGB-D images: at each time step, the demonstrator interacts with an object and selects a high-level action, and a demonstration is formed through multi-step interactions. An imitation learning architecture (OPLN) is proposed, consisting of an objects list network (OLN) and a policy learning network (PLN), each built from Long Short-Term Memory (LSTM) neural networks. The OLN learns object-sequence features extracted from the demonstration data, while the PLN learns the policy. The outputs, an action and a target object, control the robot's manipulation. Experiments show that Block Stacking and Pick and Place skills can be successfully acquired, and that the method adapts to environment variations and generalizes to similar tasks.
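The abstract describes a two-stage LSTM architecture: an OLN that encodes the list of observed objects into a sequence feature, and a PLN that maps that feature over time to a high-level action and a target object. The paper does not specify dimensions or the fusion scheme, so the following PyTorch sketch is only an illustrative reconstruction under assumed shapes; all layer sizes, head names, and the reshape-based fusion are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the OPLN architecture (OLN + PLN) described in the
# abstract. Dimensions and the fusion scheme are assumed for illustration.

class OLN(nn.Module):
    """Objects list network: encodes the per-step list of object features."""
    def __init__(self, obj_dim=16, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obj_dim, hidden, batch_first=True)

    def forward(self, objects):          # objects: (B, num_objects, obj_dim)
        _, (h, _) = self.lstm(objects)
        return h[-1]                     # (B, hidden): summary of the object list

class PLN(nn.Module):
    """Policy learning network: maps object summaries over time to outputs."""
    def __init__(self, hidden=64, num_actions=4, num_objects=5):
        super().__init__()
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.action_head = nn.Linear(hidden, num_actions)  # high-level action
        self.target_head = nn.Linear(hidden, num_objects)  # target object

    def forward(self, seq):              # seq: (B, T, hidden)
        out, _ = self.lstm(seq)
        last = out[:, -1]                # feature at the current time step
        return self.action_head(last), self.target_head(last)

class OPLN(nn.Module):
    """OLN encodes each step's object list; PLN predicts action and target."""
    def __init__(self, obj_dim=16, hidden=64, num_actions=4, num_objects=5):
        super().__init__()
        self.oln = OLN(obj_dim, hidden)
        self.pln = PLN(hidden, num_actions, num_objects)

    def forward(self, demo):             # demo: (B, T, num_objects, obj_dim)
        B, T, N, D = demo.shape
        feats = self.oln(demo.reshape(B * T, N, D)).reshape(B, T, -1)
        return self.pln(feats)

model = OPLN()
# A batch of 2 demonstrations, 6 time steps, 5 objects, 16-dim object features.
action_logits, target_logits = model(torch.randn(2, 6, 5, 16))
```

In this sketch the action and target-object heads would each be trained with a cross-entropy loss against the demonstrated choices; the actual training objective used in the paper is not stated in the abstract.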