Adversarial Attacks on Neural Networks-based Soft Sensors: Directly Attack Output

2021 
Neural network-based soft sensors are widely employed in industrial processes and are of great significance to smart manufacturing. Given the strict requirements of industrial production, it is vital to ensure the safety and robustness of these models in actual deployment. However, recent research has shown that neural networks are quite vulnerable to adversarial attacks: by imposing a tiny perturbation on an original sample, the fabricated adversarial sample can fool these models into making wrong decisions. This phenomenon may cause serious problems for the practical application of soft sensors. This paper focuses on adversarial attacks on industrial soft sensors. For the first time, we verify and analyze the effectiveness and deficiencies of existing attack methods in the industrial soft sensor scenario. To address these defects, this paper proposes a novel perspective for attacking soft sensors: attacking the model output directly. We analyze the optimization mechanism behind this idea and design two algorithms to perform the attacks. The proposed methods conform better to real industrial conditions. Moreover, compared with existing approaches, they have the potential to cause more severe damage, since their attacks are not only better concealed but also more likely to deceive technicians into executing wrong operations. The research and analysis of the proposed methods lay a solid foundation for more thorough defenses against various attacks, which is necessary for making deployed soft sensors more robust and secure.
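To make the attack setting concrete, the following is a minimal, hypothetical sketch of a gradient-based (FGSM-style) perturbation adapted to a regression soft sensor, assuming a PyTorch model. The abstract does not specify the paper's two algorithms, so the target-shift formulation and all function and parameter names here are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn as nn

def fgsm_attack_regression(model: nn.Module,
                           x: torch.Tensor,
                           epsilon: float = 0.01,
                           target_shift: float = 1.0) -> torch.Tensor:
    """Craft an adversarial process sample for a soft-sensor regressor.

    Each input feature is perturbed by at most `epsilon` (L-inf bound)
    so that the model output moves toward `clean prediction +
    target_shift`, i.e. the attack targets the sensor reading itself.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    with torch.no_grad():
        # Attacker-chosen output: shift the clean prediction by an offset.
        target = model(x) + target_shift
    loss = nn.functional.mse_loss(model(x_adv), target)
    loss.backward()
    # Step *against* the gradient to pull the output toward the target,
    # using the gradient sign so the perturbation stays within epsilon.
    return (x_adv - epsilon * x_adv.grad.sign()).detach()
```

An iterative (PGD-style) variant would repeat this signed step several times, projecting the sample back into the epsilon-ball around the clean input after each step.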