Adversarial Robust Model Compression using In-Train Pruning

2021 
Efficiently deploying learning-based systems on embedded hardware is challenging for various reasons, two of which are considered in this paper: the model's size and its robustness against attacks. Both need to be addressed even-handedly. We combine adversarial training and model pruning in a joint formulation of the fundamental learning objective during training. Unlike existing post-train pruning approaches, our method does not rely on heuristics and eliminates the need for a pre-trained model. This yields a classifier that is robust against attacks while allowing stronger compression of the model, reducing its computational effort. Compared to prior work, our approach achieves 6.21 pp higher accuracy at an 85 % reduction in parameters for ResNet20 on the CIFAR-10 dataset.
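
The abstract only outlines the joint objective, so the following is a minimal, hypothetical sketch of how adversarial training and in-train sparsification can be combined in a single training step. It assumes PyTorch, a standard L_inf PGD attack, an L1 sparsity term as a generic stand-in for the paper's pruning formulation, and an optional global magnitude threshold (`apply_pruning_masks`) to realize the parameter reduction; none of these implementation details are taken from the paper itself.

```python
# Illustrative sketch only: joint adversarial training with in-train sparsification.
# The function names, hyperparameters, and the L1/magnitude-threshold pruning used
# here are assumptions, not the paper's actual (non-heuristic) formulation.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Generate L_inf PGD adversarial examples around the clean batch x."""
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss and project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()


def train_step(model, optimizer, x, y, sparsity_weight=1e-4):
    """One joint step: adversarial loss plus a sparsity-inducing term on the weights."""
    model.eval()                       # freeze BN statistics while crafting the attack
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    # L1 penalty on conv/linear weights stands in for the paper's pruning term.
    l1 = sum(p.abs().sum() for p in model.parameters() if p.dim() > 1)
    loss = adv_loss + sparsity_weight * l1
    loss.backward()
    optimizer.step()
    return loss.item()


def apply_pruning_masks(model, target_sparsity=0.85):
    """Zero out the smallest-magnitude weights (illustrative global threshold, run periodically)."""
    weights = torch.cat([p.detach().abs().flatten() for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, target_sparsity)
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:
                p.mul_((p.abs() >= threshold).float())
```

In this sketch the sparsity term is optimized jointly with the adversarial loss from scratch, so no pre-trained model is required; the explicit thresholding step is only a placeholder for how the learned sparsity would be turned into an actual parameter reduction.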