Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation
2021
Deep neural networks (DNNs) have been shown to lack robustness: they are vulnerable to small perturbations of their inputs. This has raised safety concerns about applying DNNs in safety-critical domains. Several verification approaches based on constraint solving have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from a scalability problem, i.e., only small DNNs can be handled. To address this, abstraction-based approaches have been proposed, but they in turn face a precision problem, i.e., the bounds they obtain are often loose. In this paper, we focus on a variety of local robustness properties and a $(\delta,\varepsilon)$-global robustness property of DNNs, and investigate novel strategies for combining the constraint-solving and abstraction-based approaches to handle these properties. We implement our methods in the tool PRODeep and conduct detailed experiments on several benchmarks.
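To make the abstraction-based side concrete, the following is a minimal sketch of interval (box) bound propagation for local robustness checking of a ReLU network. This is an illustrative assumption, not the paper's specific symbolic-propagation method, and all function names here are hypothetical:

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate the box [lo, hi] through the affine map y = W x + b.
    # Splitting W into its positive and negative parts picks the
    # correct endpoint of the input interval for each output bound.
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so it can be applied to both endpoints.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def verify_local_robustness(x, eps, layers, target):
    # Check that the 'target' logit stays maximal for every input in
    # the L-infinity ball of radius eps around x. 'layers' is a list
    # of (W, b) pairs; ReLU is applied between layers but not after
    # the last one.
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:
            lo, hi = interval_relu(lo, hi)
    # Sound but incomplete: if the target's lower bound exceeds every
    # other logit's upper bound, robustness is proved; otherwise the
    # result is inconclusive (bounds may simply be too loose).
    return all(lo[target] > hi[j] for j in range(len(lo)) if j != target)
```

The incompleteness of such box bounds is exactly the precision problem the abstract mentions: a `False` result may be a spurious counterexample, which is where a constraint-solving backend can be invoked to decide the property exactly.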