Augmented Curiosity: Depth and Optical Flow Prediction for Efficient Exploration

2019 
Exploring novel environments for a specific target poses the challenge of adequately providing positive external rewards to an artificial agent. In scenarios with sparse external rewards, a reinforcement learning algorithm often cannot develop a successful policy function to govern an agent's behavior. Intrinsic rewards, however, can provide feedback on an agent's actions and enable updates towards a proper policy function in such sparse scenarios. Our approaches, the Optical Flow-Augmented Curiosity Module (OF-ACM) and the Depth-Augmented Curiosity Module (D-ACM), extend the Intrinsic Curiosity Module (ICM) of Pathak et al. The ICM forms an intrinsic reward signal from the error between a prediction of the next state and its ground truth. Our modules leverage additional sources of information, such as depth images and optical flow, to generate superior embeddings that serve as inputs for next-state prediction. In experiments on visually rich and sparse-feature scenarios in ViZDoom, our predictive modules exhibit improved exploration and faster learning of an ideal policy function. With D-ACM we show a 63.3% average improvement over ICM in time to policy convergence on “My Way Home” scenarios.
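To make the prediction-error reward concrete, below is a minimal sketch of an ICM-style forward-dynamics curiosity signal in PyTorch. The class name (ForwardDynamicsCuriosity), the layer sizes, the scaling factor eta, and the assumption of flattened observations with one-hot actions are all illustrative, not the paper's exact architecture; in the D-ACM and OF-ACM variants the embedding would instead be derived from depth or optical-flow prediction.

```python
import torch
import torch.nn as nn

class ForwardDynamicsCuriosity(nn.Module):
    """Sketch of an ICM-style intrinsic reward: predict the embedding of
    the next state from the current embedding and the action, and use
    the prediction error as the reward signal."""

    def __init__(self, obs_dim, action_dim, embed_dim=128):
        super().__init__()
        # Embedding network phi(s). The paper's D-ACM / OF-ACM modules
        # would build this embedding from depth or optical-flow
        # prediction; a plain MLP is used here for brevity.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )
        # Forward model f(phi(s_t), a_t) -> predicted phi(s_{t+1}).
        self.forward_model = nn.Sequential(
            nn.Linear(embed_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def intrinsic_reward(self, obs, action, next_obs, eta=0.5):
        phi_t = self.encoder(obs)
        with torch.no_grad():
            # Ground-truth embedding of the next state (prediction target).
            phi_next = self.encoder(next_obs)
        phi_next_pred = self.forward_model(torch.cat([phi_t, action], dim=-1))
        # r_t = (eta / 2) * ||phi_hat(s_{t+1}) - phi(s_{t+1})||^2
        return 0.5 * eta * (phi_next_pred - phi_next).pow(2).sum(dim=-1)
```

In an ICM-style setup the same squared error would also serve as the forward model's training loss, while the policy receives it as an additional reward alongside any sparse external reward.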