Detecting Operational Adversarial Examples for Reliable Deep Learning
2021
The utilisation of Deep Learning (DL) raises new challenges regarding its dependability in critical applications. Sound verification and validation methods are needed to assure the safe and reliable use of DL. However, state-of-the-art DL debug-testing methods that aim at detecting adversarial examples (AEs) ignore the operational profile, which statistically characterises the software's future operational use. This may yield only modest improvements in the software's delivered reliability, as the testing budget is likely to be wasted on detecting AEs that are unrealistic or encountered only rarely in real-life operation. In this paper, we first present the novel notion of "operational AEs": AEs that have a relatively high chance of being encountered in future operation. We then provide an initial design of a new DL testing method to efficiently detect operational AEs, along with some insights into our prospective research plan.
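The abstract leaves the detection method to the paper body, but the core idea admits a rough illustration: rank attack-generated AEs by their estimated likelihood under the operational profile, so the testing budget is spent on the "operational" ones. Below is a minimal sketch assuming the operational profile can be approximated by a kernel-density estimate fitted on observed field inputs; the helper `rank_operational_aes`, its parameters, and the toy 2-D data are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical sketch (not the paper's algorithm): score candidate AEs by
# their estimated density under the operational profile, so that AEs most
# likely to occur in real operation are examined first.
import numpy as np
from sklearn.neighbors import KernelDensity

def rank_operational_aes(candidate_aes, operational_data, bandwidth=0.5):
    """Rank candidate AEs by a kernel-density estimate of the operational
    profile fitted on field data; higher log-density means the AE is more
    likely to be encountered in future operation."""
    kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth)
    kde.fit(operational_data)                       # approximate the operational profile
    log_density = kde.score_samples(candidate_aes)  # log-likelihood of each AE
    order = np.argsort(-log_density)                # most "operational" first
    return order, log_density

# Toy usage: 2-D feature vectors standing in for, e.g., learned embeddings.
rng = np.random.default_rng(0)
operational_data = rng.normal(0.0, 1.0, size=(500, 2))  # observed field inputs
candidate_aes = rng.normal(0.0, 2.0, size=(20, 2))      # AEs found by some attack
order, scores = rank_operational_aes(candidate_aes, operational_data)
print("Most operational AE index:", order[0], "log-density:", scores[order[0]])
```

A KDE is only one plausible stand-in for the operational profile; any density model fitted on representative field data would serve the same ranking role in this sketch.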