A Brief Survey: Deep Reinforcement Learning in Mobile Robot Navigation

2020 
Conventional navigation techniques have relied mainly on a global-information approach, in which pre-built laser or camera environment maps are used to construct a path from a given start to a destination. While these methods have seen much success, they are largely confined to simple and relatively static environments. Not only does prior mapping require substantial effort, but such navigation systems also cannot learn or generalize to new, unseen places. These problems have motivated researchers to turn to machine learning approaches. In particular, the advent of Deep Reinforcement Learning (DRL) has shown much promise in tasks such as context awareness, navigation in dynamic environments, and map-less navigation. This paper surveys recent DRL papers, examines the underlying foundation for applying DRL to navigation, and highlights five key limitations: (1) low sample efficiency, (2) the gap between simulation and the real world, (3) vulnerability to being trapped in local dead corners, (4) deficient collision avoidance in dynamic settings such as multi-pedestrian and multi-agent environments, and (5) the lack of a proper evaluation benchmark. We argue that these limitations must be addressed before service robots can be used pervasively in human society.
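The foundation alluded to above, framing navigation as a reinforcement-learning problem of states, actions, and rewards, can be illustrated with a minimal tabular Q-learning sketch on a toy grid world. This is an illustrative example only, not code from any surveyed paper; the grid size, reward values, and hyperparameters are assumptions, and DRL methods replace the Q-table with a neural network over sensor inputs.

```python
import random

# Toy "navigation" task (illustrative assumptions): a 5x5 grid,
# start at (0, 0), goal at (4, 4), small step penalty, goal reward.
SIZE, GOAL = 5, (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def env_step(state, a):
    """Apply action a; clamp to grid bounds; return (next_state, reward, done)."""
    r = max(0, min(SIZE - 1, state[0] + ACTIONS[a][0]))
    c = max(0, min(SIZE - 1, state[1] + ACTIONS[a][1]))
    s2 = (r, c)
    return s2, (1.0 if s2 == GOAL else -0.01), s2 == GOAL

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning: Q[s][a] += alpha * (r + gamma*max_a' Q[s'][a'] - Q[s][a])."""
    random.seed(seed)
    Q = {(r, c): [0.0] * 4 for r in range(SIZE) for c in range(SIZE)}
    for _ in range(episodes):
        s, done = (0, 0), False
        for _ in range(200):  # cap episode length
            if done:
                break
            # epsilon-greedy exploration
            if random.random() < eps:
                a = random.randrange(4)
            else:
                a = max(range(4), key=lambda i: Q[s][i])
            s2, rew, done = env_step(s, a)
            Q[s][a] += alpha * (rew + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

def greedy_path(Q, max_steps=50):
    """Roll out the learned greedy policy from the start state."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        s, _, done = env_step(s, max(range(4), key=lambda i: Q[s][i]))
        path.append(s)
        if done:
            break
    return path

if __name__ == "__main__":
    Q = train()
    path = greedy_path(Q)
    print(path[-1], len(path) - 1)  # final cell and greedy path length
```

The same state-action-reward loop underlies the DRL navigation systems the survey discusses; their difficulty, and the source of several of the listed limitations, is that the state space (raw laser scans or images) is far too large for a table, so a deep network must approximate Q or the policy, at the cost of sample efficiency and sim-to-real robustness.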