Detecting Adversarial Examples for Time Series Classification and Its Performance Evaluation

2021 
As deep learning techniques have become increasingly used in real-world applications, their vulnerabilities have received significant attention from deep learning researchers and practitioners. In particular, adversarial examples against deep neural networks, and defense methods against them, have been well studied in recent years because such networks have serious vulnerabilities that threaten safety in the real world. This paper proposes a detection method against adversarial examples for time series classification, the task of predicting the class label to which an unlabeled time series belongs. To protect time series classification from attacks using adversarial examples, we propose three methods for detecting adversarial examples: the 2n-class-based (2NCB), 2-class-based (2CB), and feature-vector-based (FVB) detection methods. Moreover, we propose an ensemble method that detects adversarial examples by taking a majority vote over the three aforementioned methods. Experimental results show that the proposed methods are superior to the conventional method.
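The ensemble step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the three detector functions are hypothetical placeholders standing in for the 2NCB, 2CB, and FVB detectors, and each is assumed to return `True` when it flags an input time series as adversarial.

```python
# Hedged sketch of the majority-vote ensemble from the abstract.
# The three detectors below are toy stand-ins, NOT the paper's methods.
from typing import Callable, List, Sequence

Detector = Callable[[Sequence[float]], bool]

def ensemble_detect(x: Sequence[float], detectors: List[Detector]) -> bool:
    """Flag x as adversarial if a strict majority of detectors flag it."""
    votes = sum(1 for d in detectors if d(x))
    return votes > len(detectors) / 2

# Placeholder decision rules standing in for the 2NCB, 2CB, and FVB detectors.
ncb = lambda x: max(x) > 1.0   # hypothetical rule
cb  = lambda x: sum(x) < 0.0   # hypothetical rule
fvb = lambda x: len(x) > 3     # hypothetical rule

print(ensemble_detect([0.5, 2.0, -3.0, 0.1], [ncb, cb, fvb]))  # all three vote: True
```

With three detectors, a majority vote requires at least two agreeing flags, which can reduce the false-positive rate of any single detector at the cost of missing attacks that only one detector catches.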