Vulnerability assessment of machine learning based malware classification models.

2019 
The primary focus of a machine learning model is to train a system to operate autonomously. However, because learning algorithms lack built-in security functions, the learning phase itself is not secured, which allows an attacker to exploit vulnerabilities in the machine learning model. When a malicious adversary manipulates the input data, it exploits these vulnerabilities and can compromise the entire system. In this research study, we conduct a vulnerability assessment of a malware classification model by injecting adversarial examples into its datasets in order to degrade the quality of classification achieved by the trained model. The objective is to identify the exploitable security gaps in the model. The assessment is performed by exposing the malware classification model to an adversarial machine learning (AML) environment using a black-box attack. The simulation provides insight into the inputs injected into the classifiers and demonstrates that an inherent security vulnerability exists in the classification model.
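The abstract does not specify the attack implementation, so the following is only a minimal sketch of the general idea of a score-based black-box attack on a malware classifier: the attacker queries the model's output scores and greedily keeps feature perturbations that lower the "malicious" score. The feature representation, model choice, and perturbation loop are illustrative assumptions, not the paper's method.

```python
# Minimal sketch (assumed, not the paper's implementation): a score-based
# black-box attack against a toy malware classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for static malware features (e.g., API-call counts).
X = rng.integers(0, 10, size=(500, 20)).astype(float)
y = (X[:, :5].sum(axis=1) > 22).astype(int)   # 1 = "malicious" (toy labeling rule)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def black_box_attack(x, n_queries=200, step=1.0):
    """Greedy random perturbation using only the model's output scores."""
    x_adv = x.copy()
    best = model.predict_proba([x_adv])[0, 1]          # current malicious score
    for _ in range(n_queries):
        candidate = x_adv.copy()
        i = rng.integers(len(candidate))               # pick one feature to perturb
        candidate[i] = max(0.0, candidate[i] + rng.choice([-step, step]))
        score = model.predict_proba([candidate])[0, 1]
        if score < best:                               # keep only score-lowering changes
            x_adv, best = candidate, score
    return x_adv, best

# Try to push one malicious sample toward the benign side of the boundary.
x0 = X[y == 1][0]
x_adv, score = black_box_attack(x0)
print("original score:", model.predict_proba([x0])[0, 1], "adversarial score:", score)
```

The key property illustrated here is that the attacker needs no access to the model's parameters or training data, only its prediction scores, which is what makes the black-box setting a realistic threat model for deployed malware classifiers.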