Machine Learning in the Hands of a Malicious Adversary: A Near Future If Not Reality

2021 
Machine learning and artificial intelligence are being adopted in a variety of applications for automation and flexibility. Cyber security is no different: researchers and engineers have been investigating the use of data‐driven technologies both to harden the security of cyberinfrastructure and to understand how attackers might exploit vulnerabilities in such technology (e.g. adversarial machine learning). However, little work has investigated how attackers might turn machine learning and AI technology against us. This chapter discusses potential advances in targeted attacks through the use of machine learning techniques. We introduce a new concept of AI‐driven malware, which advances the already sophisticated cyber threats (i.e. advanced targeted attacks) that are on the rise. Furthermore, we demonstrate our prototype AI‐driven malware, built on top of a set of statistical learning technologies, on two distinct cyber‐physical systems (i.e. the Raven‐II surgical robot and a building automation system). Our experimental results demonstrate that, with the support of AI technology, malware can mimic human attackers both in deriving attack payloads customized to the target system and in determining the most opportune time to trigger those payloads so as to maximize the chance of realizing the malicious intent. No public records report a real threat driven by machine learning models; however, such advanced malware might already exist and simply remain undetected. We hope this chapter motivates further research on advanced offensive technologies, not to favor adversaries, but to know them and be prepared.