Learning to Navigate in Human Environments via Deep Reinforcement Learning

2019 
Mobile robots have been widely deployed in human-populated environments. To interact with humans, robots require the capacity to navigate safely and efficiently in complex environments. Recent works have successfully applied reinforcement learning to learn socially normative navigation behaviors. However, they mostly focus on modeling human-robot cooperation and neglect the complex interactions among pedestrians. In addition, these methods assume perfect sensing of pedestrian states, which makes the models less robust to perception uncertainty. This work presents a novel algorithm that learns an efficient navigation policy exhibiting socially normative behaviors. We propose to employ convolutional social pooling to jointly capture human-robot cooperation and inter-human interactions in an actor-critic reinforcement learning framework. We further address partial observability in socially normative navigation: our model learns representations of unobservable states with recurrent neural networks, which also improves the stability of the algorithm. Experimental results show that the proposed algorithm enables robots to learn socially normative navigation behaviors and outperforms state-of-the-art methods.
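The core of convolutional social pooling, as named in the abstract, is to place per-pedestrian feature encodings into a spatial grid centered on the robot and then apply convolution over that grid so that inter-pedestrian spatial structure is captured. The following is a minimal NumPy sketch of that idea only; the grid layout, cell size, feature dimension, and function names are illustrative assumptions, not details from the paper.

```python
import numpy as np

def social_pooling_grid(robot_pos, ped_states, grid_size=4, cell=1.0, feat_dim=3):
    """Place each pedestrian's feature vector into a spatial grid centered
    on the robot (illustrative layout: grid_size x grid_size cells).
    ped_states is a list of ((x, y), feature_vector) pairs."""
    grid = np.zeros((grid_size, grid_size, feat_dim))
    half = grid_size * cell / 2.0
    for (px, py), feat in ped_states:
        dx, dy = px - robot_pos[0], py - robot_pos[1]
        if -half <= dx < half and -half <= dy < half:
            i = int((dx + half) // cell)
            j = int((dy + half) // cell)
            grid[i, j] += feat  # sum-pool pedestrians that share a cell
    return grid

def conv2d_valid(grid, kernel):
    """Naive 'valid' 2D convolution over the grid's spatial axes,
    summing across the feature channel. In the actual model this would
    be a learned convolutional layer; here the kernel is fixed."""
    gs, k = grid.shape[0], kernel.shape[0]
    out = np.zeros((gs - k + 1, gs - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(grid[i:i + k, j:j + k, :] * kernel)
    return out

# One pedestrian at (0.5, 0.5) relative to the robot at the origin:
grid = social_pooling_grid((0.0, 0.0), [((0.5, 0.5), np.array([1.0, 0.0, 0.0]))])
pooled = conv2d_valid(grid, np.ones((2, 2, 3)))
```

In the full method the pooled interaction features would be fed, together with the robot's own state, into the recurrent actor-critic network so that the policy conditions on both inter-human structure and a learned belief over unobservable state.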