Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks

2021 
Abstract

Creating a state-of-the-art deep-learning system requires vast amounts of data, expertise, and hardware, yet research into copyright protection for neural networks has been limited. One of the main methods for achieving such protection relies on the susceptibility of neural networks to backdoor attacks in order to inject a watermark into the network, but the robustness of these tactics has been evaluated primarily against pruning, fine-tuning, and model-inversion attacks. In this work, we propose an offensive neural network "laundering" algorithm that removes these backdoor watermarks from neural networks even when the adversary has no prior knowledge of the structure of the watermark. We can effectively remove watermarks used in recent defense and copyright-protection mechanisms while retaining test accuracies on the target task above 97% on MNIST and above 80% on CIFAR-10. For all watermarking methods addressed in this paper, we find that the robustness of the watermark is significantly weaker than originally claimed. We also demonstrate the feasibility of our algorithm on more complex tasks, as well as in more realistic scenarios where the adversary can carry out efficient laundering attacks using less than 1% of the original training set size, demonstrating that existing watermark-embedding procedures do not achieve the robustness they claim.
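To make the attack surface concrete, the sketch below illustrates one generic building block used by removal attacks of this kind: pruning hidden units that are nearly dormant on clean data, since backdoor triggers (and hence backdoor watermarks) often rely on neurons that rarely fire on legitimate inputs. This is a minimal toy illustration in numpy, not the paper's laundering algorithm; the model, layer sizes, and pruning fraction are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: one ReLU hidden layer (illustrative only).
W1 = rng.normal(size=(8, 4))   # hidden x input weights
W2 = rng.normal(size=(2, 8))   # output x hidden weights

def forward(x, W1, W2):
    """Return (logits, hidden activations) for a single input vector."""
    h = np.maximum(0.0, W1 @ x)
    return W2 @ h, h

def prune_dormant_units(W1, W2, clean_inputs, frac=0.25):
    """Zero out the hidden units least active on clean data.

    The idea: neurons that are quiet on legitimate inputs are the most
    likely carriers of trigger-specific behavior, so removing them can
    disable a backdoor watermark while barely touching clean accuracy.
    A real attack would follow this with fine-tuning on the small clean set.
    """
    acts = np.stack([forward(x, W1, W2)[1] for x in clean_inputs])
    mean_act = acts.mean(axis=0)              # average activation per unit
    k = int(frac * W1.shape[0])               # number of units to remove
    dormant = np.argsort(mean_act)[:k]        # quietest units
    W1p, W2p = W1.copy(), W2.copy()
    W1p[dormant, :] = 0.0                     # sever incoming weights
    W2p[:, dormant] = 0.0                     # sever outgoing weights
    return W1p, W2p, dormant

clean = [rng.normal(size=4) for _ in range(32)]  # stand-in clean samples
W1p, W2p, removed = prune_dormant_units(W1, W2, clean)
print(len(removed))  # 2 of 8 hidden units pruned at frac=0.25
```

In practice this kind of pruning is only the first step; the adversary's small clean dataset (here, under 1% of the original training set) is then used to fine-tune the pruned model and recover clean-task accuracy.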