Degrading Detection Performance of Wireless IDSs Through Poisoning Feature Selection

2018 
Machine learning algorithms have been increasingly adopted in Intrusion Detection Systems (IDSs) and have achieved demonstrable results, but few studies have considered the intrinsic vulnerabilities of these algorithms in adversarial environments. In this work, we use a poisoning attack to degrade the accuracy of wireless IDSs that rely on feature selection algorithms. Specifically, we use a gradient poisoning method to generate adversarial examples that induce the classifier to select a feature subset that maximizes the classification error rate. We treat this as a box-constrained problem and use Lagrange multipliers together with a backtracking line search to find a feasible gradient step. To evaluate our method, we experimentally demonstrate that the attack can degrade machine-learning-based detection, covering both filter and embedded feature selection algorithms, on three public benchmark network datasets and a wireless sensor network dataset: KDD99, NSL-KDD, Kyoto 2006+, and WSN-DS. Our results show that the gradient poisoning method causes a significant drop of about 20% in the classification accuracy of IDSs.
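The abstract summarizes the attack but gives no implementation. The sketch below illustrates the kind of box-constrained gradient-ascent step it describes, applied to a single poisoning point. It is a minimal sketch under stated assumptions: the names poison_point, loss_fn, and grad_fn are hypothetical, and the box constraint is enforced here by projection (clipping) with a Bertsekas-style Armijo backtracking line search along the projection arc, rather than the Lagrange-multiplier treatment the paper itself uses.

import numpy as np

def poison_point(x, loss_fn, grad_fn, lo=0.0, hi=1.0,
                 max_iter=50, alpha0=1.0, shrink=0.5, c=1e-4):
    # x        : the poisoning point (feature vector), moved by ascent
    # loss_fn  : attacker's objective (e.g. the victim IDS's
    #            classification error), to be maximized
    # grad_fn  : gradient of loss_fn w.r.t. the poisoning point
    # [lo, hi] : box constraint on feature values
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        g = grad_fn(x)
        f0 = loss_fn(x)
        alpha = alpha0
        # Backtracking line search: shrink the step until the Armijo
        # sufficient-increase condition holds along the projection arc.
        while alpha > 1e-8:
            x_new = np.clip(x + alpha * g, lo, hi)  # project onto box
            if loss_fn(x_new) >= f0 + c * g.dot(x_new - x):
                break
            alpha *= shrink
        else:
            return x            # no feasible ascent step found
        if np.allclose(x_new, x):
            return x            # converged on the box boundary
        x = x_new
    return x

In the attack the abstract describes, loss_fn would correspond to the classification error of the IDS after the selected feature subset is influenced by the poisoned point; deriving its gradient through training and feature selection is the paper's contribution and is assumed available here.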