Towards Robust Neural Networks via Orthogonal Diversity

2021 
Deep Neural Networks (DNNs) are vulnerable to imperceptible perturbations of images generated by adversarial attacks, which has spurred research on the adversarial robustness of DNNs. A line of methods represented by \textit{adversarial training} and its variants has proven to be among the most practical techniques for enhancing DNN robustness. Generally, adversarial training enriches the training data by mixing perturbed examples into the clean data. Despite its effectiveness in defending against specific attacks, adversarial training essentially benefits from data augmentation: it does not contribute to the robustness of the DNN itself, and it usually suffers an accuracy drop on clean data as well as reduced effectiveness against unknown attacks. Targeting the robustness of the DNN itself, we propose a novel defense that augments the model so that it learns features adaptive to diverse inputs, including adversarial examples. Specifically, we introduce multiple paths to augment the network and impose an orthogonality constraint on these paths. In addition, a margin-maximization loss is designed to further boost DIversity via Orthogonality (DIO). Extensive empirical results on various datasets, architectures, and attacks demonstrate the robustness of DIO: it does not need any adversarial examples and yet achieves greater robustness than state-of-the-art adversarial training methods.