Evolutionary Algorithm Driven Explainable Adversarial Artificial Intelligence

2020 
It is well known that machine learning algorithms can behave in undesirable ways when exposed to conditions that are not adequately represented in the training dataset. This has led to growing interest across many communities in a central question: where do algorithms and trained models break? Recently, methods such as generative adversarial networks and variational autoencoders have been proposed to create adversarial examples that challenge algorithms, causing artificial intelligence systems to produce more false detections or to lose recognition entirely. The problem is that existing solutions are, for the most part, black boxes; a current gap is how to better control and understand adversarial algorithms. Herein, we propose the concept of an adversarial modifier set as an understandable and controllable way to generate adversarial examples. This is achieved by exploiting the improved evolution-constructed algorithm to identify the features that a victim algorithm values in imagery. These features are combined into a tuple library that preserves spatial relations. Last, a set of algorithmically controlled modifiers that generate the adversarial imagery is found by examining the content of the false imagery. Preliminary results are encouraging and demonstrate that this approach has benefits both in generating explainable adversarial examples and in shedding insight into the victim algorithm's decision making.
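
To make the idea concrete, below is a minimal sketch of an evolutionary search for an adversarial modifier set. It assumes the victim model is exposed as a scalar confidence function, and it represents each modifier as a localized patch edit so that spatial location is preserved. All names here (victim_confidence, apply_modifiers, evolve_modifier_set) and the patch-based modifier representation are illustrative assumptions, not the paper's actual evolution-constructed feature pipeline.

```python
# Sketch: evolve a set of localized image modifiers that lowers a victim
# model's confidence. Hypothetical stand-in, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)

def victim_confidence(image):
    """Stand-in for the victim algorithm's confidence on the true class.
    In practice this would query the trained model under attack."""
    return float(image.mean())  # placeholder scoring for this sketch

def apply_modifiers(image, modifiers):
    """Each modifier is (row, col, size, delta): a patch edit applied at a
    fixed location, so spatial relations in the image are preserved."""
    out = image.copy()
    for r, c, s, d in modifiers:
        out[r:r + s, c:c + s] = np.clip(out[r:r + s, c:c + s] + d, 0.0, 1.0)
    return out

def random_modifier(h, w):
    s = int(rng.integers(2, 8))
    return (int(rng.integers(0, h - s)), int(rng.integers(0, w - s)),
            s, float(rng.uniform(-0.5, 0.5)))

def evolve_modifier_set(image, pop_size=30, n_mods=5, generations=50):
    h, w = image.shape
    pop = [[random_modifier(h, w) for _ in range(n_mods)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness: lower victim confidence on the modified image is better.
        scores = [victim_confidence(apply_modifiers(image, ind)) for ind in pop]
        order = np.argsort(scores)
        survivors = [pop[i] for i in order[: pop_size // 2]]
        # Mutate each survivor (swap out one modifier) to refill the population.
        children = []
        for ind in survivors:
            child = list(ind)
            child[int(rng.integers(len(child)))] = random_modifier(h, w)
            children.append(child)
        pop = survivors + children
    scores = [victim_confidence(apply_modifiers(image, ind)) for ind in pop]
    return pop[int(np.argmin(scores))]

best_modifiers = evolve_modifier_set(rng.random((64, 64)))
```

Because each surviving modifier is an explicit, human-readable tuple (location, extent, intensity change), the evolved set can be inspected directly, which is the sense in which such a search yields explainable adversarial examples rather than an opaque perturbation.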