Hijacking Tracker: A Powerful Adversarial Attack on Visual Tracking

2020 
Visual object tracking has made important breakthroughs with the assistance of deep learning models. Unfortunately, recent research has clearly shown that deep learning models are vulnerable to malicious adversarial attacks, which mislead the models into making wrong decisions by perturbing the input image. This threat alerts us to pay attention to the model security of deep-learning-based tracking algorithms. Therefore, we study adversarial attacks against advanced deep-learning-based trackers to better identify the vulnerability of tracking algorithms. In this paper, we propose to add slight adversarial perturbations to the input image through an inconspicuous but powerful attack strategy, the hijacking algorithm. Specifically, the hijacking strategy misleads trackers in two ways: shape hijacking changes the shape of the model output, while position hijacking gradually pushes the output to an arbitrary position in the image frame. We further propose an adaptive optimization approach that integrates the two hijacking mechanisms efficiently. As a result, the hijacking algorithm gradually fools the tracker into tracking the wrong target. Experimental results demonstrate the powerful attack ability of our method: it quickly hijacks state-of-the-art trackers and reduces the accuracy of these models by more than 90% on OTB2015.
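The abstract describes the attack only at a high level, so the following is a minimal sketch of what a combined position-and-shape hijacking step could look like, assuming a generic gradient-based adversarial formulation. The toy tracker, the two MSE loss terms, and the fixed weight lam are all hypothetical stand-ins; the paper's exact losses, its adaptive weighting scheme, and the trackers it targets are not specified in this abstract.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in for a deep tracker's regression head: it maps a
# frame to a box (cx, cy, w, h). The real attack targets actual deep
# trackers; this toy module only keeps the sketch self-contained.
class ToyTracker(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3, stride=2, padding=1)
        self.head = torch.nn.Linear(8, 4)

    def forward(self, frame):
        feat = self.conv(frame).mean(dim=(2, 3))  # global average pooling
        return self.head(feat)                    # predicted (cx, cy, w, h)

def hijack_step(tracker, frame, delta, goal_pos, goal_shape,
                alpha=1e-2, eps=8 / 255, lam=0.5):
    """One iteration of a hypothetical hijacking attack.

    Loss = position term (pull the predicted center toward goal_pos)
         + lam * shape term (pull the predicted w/h toward goal_shape).
    The perturbation delta is updated by signed gradient descent and
    clipped to an L_inf budget eps. lam is a fixed placeholder for the
    paper's adaptive integration of the two hijacking mechanisms.
    """
    delta = delta.detach().requires_grad_(True)
    pred = tracker((frame + delta).clamp(0, 1))
    pos_loss = F.mse_loss(pred[:, :2], goal_pos)      # position hijacking
    shape_loss = F.mse_loss(pred[:, 2:], goal_shape)  # shape hijacking
    (pos_loss + lam * shape_loss).backward()
    with torch.no_grad():
        delta = (delta - alpha * delta.grad.sign()).clamp(-eps, eps)
    return delta

# Usage sketch: nudge the predicted box a little further every frame so
# the hijack stays inconspicuous, as the abstract describes.
tracker = ToyTracker().eval()
for p in tracker.parameters():      # attack the input, not the weights
    p.requires_grad_(False)
frame = torch.rand(1, 3, 64, 64)
delta = torch.zeros_like(frame)
goal_pos = torch.tensor([[40.0, 40.0]])   # where to push the output
goal_shape = torch.tensor([[5.0, 5.0]])   # how to distort the box
for _ in range(10):
    delta = hijack_step(tracker, frame, delta, goal_pos, goal_shape)
```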