Improving exploration efficiency of deep reinforcement learning through samples produced by generative model

2021 
Abstract Deep reinforcement learning (DRL) has made remarkable achievements in artificial intelligence. However, it relies on stochastic exploration, which suffers from low efficiency, especially in the early learning stages, where the time complexity is nearly exponential. To address this problem, an algorithm referred to as Generative Action Selection through Probability (GRASP) is proposed to improve exploration in reinforcement learning. The primary insight is to reshape exploration spaces so as to limit the choice of exploration behaviors. More specifically, GRASP trains a generator with a generative adversarial network (GAN) to produce exploration spaces from demonstrations. The agent then selects actions from the new exploration spaces via a modified ε-greedy algorithm, which allows GRASP to be incorporated into existing standard deep reinforcement learning algorithms. Experimental results show that deep reinforcement learning equipped with GRASP achieves significant improvements in simulated environments.
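To illustrate the action-selection step the abstract describes, here is a minimal Python sketch of an ε-greedy rule whose exploration branch draws only from a generator-proposed subset of actions. The abstract does not give GRASP's exact formulation, so the interface (`generator(state, k)` returning candidate action indices, the `k` parameter, and `toy_generator`) is a hypothetical illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def grasp_epsilon_greedy(state, q_values, generator, epsilon=0.1, k=4):
    """Modified epsilon-greedy over a generator-shaped exploration space.

    q_values : Q(s, a) estimates for all actions, shape [n_actions].
    generator: callable mapping (state, k) to a subset of action indices;
               stands in for the GAN generator trained on demonstrations
               (hypothetical interface, not taken from the paper).
    """
    if rng.random() < epsilon:
        # Explore, but only inside the reshaped exploration space:
        # sample among the k actions the generator proposes instead of
        # uniformly over the full action set.
        candidates = generator(state, k)
        return int(rng.choice(candidates))
    # Exploit as usual: greedy action over all actions.
    return int(np.argmax(q_values))

# Toy stand-in generator: pretend the GAN scores actions and returns
# the top-k indices as the exploration space for this state.
def toy_generator(state, k):
    scores = rng.random(6)          # placeholder for generator output
    return np.argsort(scores)[-k:]  # indices of the k highest scores

action = grasp_epsilon_greedy(state=None,
                              q_values=rng.random(6),
                              generator=toy_generator)
print("selected action:", action)
```

Because only the exploration branch changes, this rule drops into any value-based DRL agent (e.g., DQN) wherever the standard ε-greedy choice is made.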