Frame-Correlation Transfers Trigger Economical Attacks on Deep Reinforcement Learning Policies

2021 
Adversarial attacks can be deemed a necessary prerequisite evaluation procedure before the deployment of any reinforcement learning (RL) policy. Most existing approaches for generating adversarial attacks are gradient-based and extensive, i.e., they perturb every pixel of every frame. In contrast, recent advances show that gradient-free selective perturbations (i.e., attacking only selected pixels and frames) can constitute a more realistic adversary. However, these attacks treat every frame in isolation, ignoring the relationship between neighboring states of a Markov decision process; the resulting high computational complexity tends to limit their real-world plausibility given the tight time constraints in RL. Given the above, this article presents the first study of how transferability across frames can be exploited to boost the creation of minimal yet powerful attacks in image-based RL. To this end, we introduce three types of frame-correlation transfers (FCTs) (i.e., anterior-case transfer, random-projection-based transfer, and principal-components-based transfer) with varying degrees of computational complexity for generating adversaries via a genetic algorithm. We empirically demonstrate the tradeoff between the complexity and potency of the transfer mechanism by attacking four fully trained state-of-the-art policies on six Atari games. Our FCTs dramatically speed up attack generation compared with existing methods, often reducing the required computation time to nearly zero, thus shedding light on the real threat of real-time attacks in RL.
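To make the transfer idea concrete, below is a minimal illustrative sketch of the simplest variant, anterior-case transfer: the perturbation that fooled the previous frame is reapplied to the current frame before any new search is launched. This is an assumption-laden reading of the abstract, not the authors' implementation; the callables `policy` and `fresh_attack` (e.g., a genetic-algorithm search) and all names are hypothetical.

```python
import numpy as np

def anterior_case_transfer(policy, frame, prev_perturbation, fresh_attack):
    """Illustrative anterior-case transfer (hypothetical sketch).

    policy            -- callable mapping a frame array (H, W, C) in [0, 1]
                         to a discrete action id
    frame             -- current observation
    prev_perturbation -- perturbation that fooled the preceding frame,
                         or None if no attack has succeeded yet
    fresh_attack      -- fallback per-frame search (e.g., a genetic
                         algorithm) returning a new perturbation
    """
    clean_action = policy(frame)
    if prev_perturbation is not None:
        # Transfer step: reapply the anterior perturbation unchanged.
        candidate = np.clip(frame + prev_perturbation, 0.0, 1.0)
        if policy(candidate) != clean_action:
            # Transfer succeeded: near-zero extra computation this frame.
            return prev_perturbation
    # Transfer failed (or first frame): fall back to a fresh search.
    return fresh_attack(policy, frame)
```

The sketch reflects the abstract's core claim: because neighboring states of a Markov decision process are highly correlated, a perturbation found once often keeps flipping the policy's action on subsequent frames, which is what would drive the per-frame attack cost toward zero.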