Deep Reinforcement Learning of Map-Based Obstacle Avoidance for Mobile Robot Navigation

2021 
Autonomous and safe navigation in complex environments without collisions is particularly important for mobile robots. In this paper, we propose an end-to-end deep reinforcement learning method for mobile robot navigation with map-based obstacle avoidance. Using experience collected in a simulation environment, a convolutional neural network is trained to predict the proper steering action of the robot from its egocentric local grid maps, which can accommodate various sensors and fusion algorithms. We use dueling double DQN with prioritized experience replay to update the network parameters and integrate curriculum learning techniques to enhance performance. The trained deep neural network is then transferred to and executed on a real-world mobile robot, guiding it around local obstacles during long-range navigation. We performed qualitative and quantitative evaluations of the approach in simulation and in real-robot experiments. The results show that the end-to-end map-based obstacle avoidance model is easy to deploy without any fine-tuning, robust to sensor noise, compatible with different sensors, and outperforms related DRL-based models on many evaluation metrics.
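To make the described pipeline concrete, the sketch below shows one way a dueling DQN could map an egocentric local grid map to Q-values over discrete steering actions, together with a double-DQN target computation. The input resolution (1x60x60), the number of actions, and all layer widths are illustrative assumptions rather than the paper's exact architecture; prioritized replay and curriculum scheduling are omitted for brevity.

```python
# Minimal sketch (assumed shapes and layer sizes) of a dueling DQN over an
# egocentric local grid map, plus a double-DQN bootstrap target.
import torch
import torch.nn as nn


class DuelingGridMapDQN(nn.Module):
    def __init__(self, num_actions: int = 5, map_size: int = 60):
        super().__init__()
        # Convolutional encoder for the single-channel egocentric grid map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size from a dummy forward pass.
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 1, map_size, map_size)).shape[1]
        # Dueling heads: scalar state value V(s) and per-action advantage A(s, a).
        self.value_head = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 1))
        self.adv_head = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, num_actions))

    def forward(self, grid_map: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(grid_map)
        value = self.value_head(feats)            # (B, 1)
        adv = self.adv_head(feats)                # (B, num_actions)
        # Standard dueling aggregation: Q = V + (A - mean(A)).
        return value + adv - adv.mean(dim=1, keepdim=True)


def double_dqn_target(online: nn.Module, target: nn.Module,
                      next_maps: torch.Tensor, rewards: torch.Tensor,
                      dones: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double-DQN target: pick the next action with the online network,
    evaluate it with the target network."""
    with torch.no_grad():
        next_actions = online(next_maps).argmax(dim=1, keepdim=True)
        next_q = target(next_maps).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q


if __name__ == "__main__":
    net = DuelingGridMapDQN()
    fake_maps = torch.rand(4, 1, 60, 60)   # a batch of egocentric local grid maps
    print(net(fake_maps).shape)            # -> torch.Size([4, 5])
```

In a full training loop, transitions sampled from a prioritized replay buffer would be weighted by their importance-sampling factors, and the target network would be periodically synchronized with the online network, as is standard for dueling double DQN.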