Motion Planning for a Snake Robot using Double Deep Q-Learning

2021 
Motion planning for a snake robot in an unknown, complex environment is a long-standing research problem because of the complex control of the modular mechanism. We propose a novel deep reinforcement learning-based framework for motion planning. In this model-free framework, we propose a double deep Q-learning-based technique to learn the optimal policy for reaching the goal point from a random start point in a minimum number of steps in various unknown environments. In this approach, the agent learns to minimize the distance between the current and goal positions by aligning its yaw angle toward the goal point through control of multiple locomotive gaits. For experimental evaluation, we trained and tested the model in obstacle-free terrains. We trained the model on the mud terrain and tested it for 50 episodes on five different terrains: concrete, default, metallic, mud, and wooden. From the simulation results, we observe that the learned optimal policy shows promising results in all unknown environments, with a performance efficiency of 100% on all terrains except the wooden terrain, where it fails in only one episode and achieves 98% efficiency.
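The double deep Q-learning technique the abstract refers to decouples action selection from action evaluation: the online network picks the greedy next action, and the target network scores it. A minimal sketch of that target computation, assuming a discrete action set of locomotive gaits (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

# Sketch of the Double DQN target (Van Hasselt et al., 2016):
#   y = r + gamma * Q_target(s', argmax_a Q_online(s', a))
# Networks are stood in for by precomputed (batch, n_actions) value arrays.

def double_dqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Compute Double DQN bootstrap targets for a batch of transitions.

    q_online_next, q_target_next: (batch, n_actions) Q-values at next states s'.
    rewards, dones: (batch,) arrays; dones masks out terminal bootstrapping.
    """
    best_actions = np.argmax(q_online_next, axis=1)  # selection: online net
    batch_idx = np.arange(len(rewards))
    next_values = q_target_next[batch_idx, best_actions]  # evaluation: target net
    return rewards + gamma * next_values * (1.0 - dones)

# Toy batch: 2 transitions, 3 hypothetical gait actions
# (e.g., turn-left, move-forward, turn-right).
q_on = np.array([[0.1, 0.5, 0.2], [0.3, 0.0, 0.4]])
q_tg = np.array([[0.2, 0.4, 0.1], [0.6, 0.1, 0.7]])
rewards = np.array([1.0, -0.1])
dones = np.array([0.0, 1.0])  # second transition is terminal

print(double_dqn_targets(q_on, q_tg, rewards, dones, gamma=0.9))
# first: 1.0 + 0.9 * 0.4 = 1.36; second (terminal): -0.1
```

Using separate networks for selection and evaluation mitigates the overestimation bias of vanilla deep Q-learning, which matters here because the agent must reliably rank gait choices under noisy terrain dynamics.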