On the Adversarial Robustness of Subspace Learning

2019 
In this paper, we investigate the adversarial robustness of subspace learning problems. Unlike the scenario addressed by classic robust algorithms, which assume that only a fraction of the data is corrupted, we consider a more powerful adversary who can observe the entire data set and modify all of it. The goal of the adversary is to maximize the distance between the subspace learned from the original data set and the subspace learned from the modified data. We characterize the optimal rank-one attack strategy and show that it depends on the smallest singular value of the original data matrix and the adversary's energy budget.
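To make the setup concrete, the following is a minimal sketch of the problem in Python, not the paper's attack. It learns a principal subspace via SVD, applies an arbitrary rank-one perturbation under a Frobenius-norm energy budget, and measures the resulting subspace distance via projection matrices. The choice of perturbation directions here is purely illustrative; characterizing the optimal choice is the contribution of the paper.

```python
import numpy as np

def principal_subspace(X, k):
    """Top-k left singular subspace of the data matrix X (d x n)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def subspace_distance(U1, U2):
    """Frobenius distance between the projection matrices of two subspaces."""
    return np.linalg.norm(U1 @ U1.T - U2 @ U2.T, ord="fro")

rng = np.random.default_rng(0)
d, n, k = 20, 100, 3
# Low-rank data plus small noise.
X = rng.standard_normal((d, k)) @ rng.standard_normal((k, n)) \
    + 0.1 * rng.standard_normal((d, n))

# Rank-one modification Delta = eta * u v^T with unit-norm u, v, so that
# ||Delta||_F = eta is the adversary's energy budget. The directions u, v
# are arbitrary here; the paper derives the optimal ones in terms of the
# smallest singular value of X and the budget eta.
eta = 5.0
u = rng.standard_normal(d); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
X_adv = X + eta * np.outer(u, v)

U_clean = principal_subspace(X, k)
U_adv = principal_subspace(X_adv, k)
print("subspace distance:", subspace_distance(U_clean, U_adv))
```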