Incremental Bounded Model Checking of Artificial Neural Networks in CUDA

2019 
Artificial neural networks (ANNs) are powerful computing systems employed in a wide range of applications due to their ability to generalize and to respond to unexpected inputs/patterns. However, implementations of ANNs in safety-critical systems may lead to failures that are rarely predicted in the design phase, since ANNs are highly parallel and their parameters are hard to interpret. Here we develop and evaluate a novel symbolic software verification framework based on incremental bounded model checking (BMC) to check for adversarial cases and coverage methods in a multi-layer perceptron (MLP). In addition, we developed ESBMC-GPU, an efficient SMT-based Context-Bounded Model Checker for Graphical Processing Units, to ensure the reliability of specific safety properties whose violation would cause a safety-critical system to make incorrect decisions, at the cost of material damage or even human lives. This paper presents the first symbolic verification framework to reason over ANNs implemented in CUDA. Our experimental results show that our approach, implemented in ESBMC-GPU, can successfully verify safety properties and coverage methods in ANNs and correctly generate 28 adversarial cases in MLPs.
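To make the verification target concrete, below is a minimal sketch of the kind of CUDA MLP code a BMC tool such as ESBMC-GPU could reason about. This is not a benchmark from the paper: the layer sizes, weights, inputs, and the safety bound OUT_MAX are illustrative assumptions. A verifier would typically treat the input as nondeterministic and check whether any execution violates the assertion, yielding an adversarial case as a counterexample.

```cuda
// Minimal sketch (assumed example, not from the paper) of a CUDA MLP
// layer with a safety property stated as an assertion.
#include <cassert>
#include <cuda_runtime.h>

#define IN_DIM  4
#define OUT_DIM 2
#define OUT_MAX 1.0f  // hypothetical safety bound on each output

// One fully connected layer with ReLU activation: one thread per neuron.
__global__ void mlp_layer(const float *w, const float *b,
                          const float *in, float *out) {
    int neuron = threadIdx.x;
    if (neuron < OUT_DIM) {
        float acc = b[neuron];
        for (int i = 0; i < IN_DIM; ++i)
            acc += w[neuron * IN_DIM + i] * in[i];
        out[neuron] = acc > 0.0f ? acc : 0.0f;  // ReLU
    }
}

int main(void) {
    // Illustrative weights, bias, and input; a BMC tool would replace
    // the concrete input with a symbolic (nondeterministic) one and
    // unwind the kernel up to a bound.
    float w[OUT_DIM * IN_DIM] = {0.1f, -0.2f, 0.3f, 0.0f,
                                 0.2f,  0.1f, -0.1f, 0.4f};
    float b[OUT_DIM] = {0.05f, -0.05f};
    float in[IN_DIM] = {0.5f, 0.25f, -0.5f, 1.0f};
    float out[OUT_DIM] = {0};

    float *dw, *db, *din, *dout;
    cudaMalloc(&dw, sizeof w);   cudaMalloc(&db, sizeof b);
    cudaMalloc(&din, sizeof in); cudaMalloc(&dout, sizeof out);
    cudaMemcpy(dw, w, sizeof w, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, sizeof b, cudaMemcpyHostToDevice);
    cudaMemcpy(din, in, sizeof in, cudaMemcpyHostToDevice);

    mlp_layer<<<1, OUT_DIM>>>(dw, db, din, dout);
    cudaMemcpy(out, dout, sizeof out, cudaMemcpyDeviceToHost);

    // Safety property: every output stays within the assumed bound.
    // A violated assertion is the kind of counterexample (adversarial
    // case) the verifier searches for.
    for (int j = 0; j < OUT_DIM; ++j)
        assert(out[j] <= OUT_MAX);

    cudaFree(dw); cudaFree(db); cudaFree(din); cudaFree(dout);
    return 0;
}
```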