A High Probability Safety Guarantee for Shifted Neural Network Surrogates

2020 
Embedding simulation models developed during the design of a platform opens up many potential new functionalities but requires additional certification. These models usually demand too much computing power and take too long to run, so an approximation must be built that is compatible with operational, hardware, and real-time constraints. We must also prove that decisions made by the system using the surrogate model instead of the reference one remain safe; this confidence in safety has to be demonstrated to certification authorities. In cases where safety can be ensured by systematically over-estimating the reference model, we propose several probabilistic safety bounds, which we apply to a braking-distance use case. We also derive a new loss function suited to shifted surrogates and study the influence of the confidence parameters on the trade-off between the safety and the accuracy of the surrogate models. The main contributions of the paper are:
(i) We define safety as the requirement that a surrogate model over-estimate the reference model with high probability.
(ii) We use Bernstein-type deviation inequalities to estimate the probability that a surrogate model under-estimates the reference model (see the inequality below).
(iii) We show how to shift a surrogate to guarantee safeness with high probability (see the calibration sketch below).
(iv) Since shifting degrades the accuracy of the surrogate, we derive a new regression loss function, which we call SMSE, in order to build surrogates with safeness-promoting constraints (see the loss sketch below).
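One standard Bernstein-type deviation inequality that can play the role described in contribution (ii) is the empirical Bernstein bound of Maurer and Pontil (2009); whether the paper uses this exact variant is an assumption. For i.i.d. samples $Z_1, \dots, Z_n \in [0, 1]$ with mean $p$, empirical mean $\bar{Z}_n$, and sample variance $V_n$, with probability at least $1 - \delta$,

$$ p \;\le\; \bar{Z}_n + \sqrt{\frac{2 V_n \ln(2/\delta)}{n}} + \frac{7 \ln(2/\delta)}{3(n-1)}. $$

Applied to the indicator $Z_i = \mathbf{1}\{\hat{f}(X_i) + s < f(X_i)\}$, this yields a high-probability upper bound on the probability that a surrogate $\hat{f}$ shifted by $s$ under-estimates the reference model $f$.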
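Building on that bound, here is a minimal Python sketch of contribution (iii): it searches a candidate grid for the smallest constant shift whose certified under-estimation probability falls below a target risk eps. The grid search, the constant-shift form, and the function names (bernstein_upper_bound, calibrate_shift) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def bernstein_upper_bound(z, delta):
    """Empirical Bernstein upper confidence bound (Maurer & Pontil, 2009)
    on the mean of i.i.d. samples z in [0, 1], valid w.p. >= 1 - delta."""
    n = len(z)
    return (z.mean()
            + np.sqrt(2.0 * z.var(ddof=1) * np.log(2.0 / delta) / n)
            + 7.0 * np.log(2.0 / delta) / (3.0 * (n - 1)))

def calibrate_shift(y_ref, y_surr, eps, delta, candidate_shifts):
    """Return the smallest candidate shift s such that, w.p. >= 1 - delta,
    P(shifted surrogate under-estimates the reference) <= eps."""
    for s in np.sort(candidate_shifts):
        under = (y_surr + s < y_ref).astype(float)  # under-estimation indicators
        if bernstein_upper_bound(under, delta) <= eps:
            return s
    return None  # no candidate shift certifies the target risk

# Toy usage on synthetic held-out predictions (illustrative data only).
rng = np.random.default_rng(0)
y_ref = rng.normal(100.0, 10.0, size=2000)           # reference braking distances
y_surr = y_ref + rng.normal(-1.0, 2.0, size=2000)    # surrogate predictions
s = calibrate_shift(y_ref, y_surr, eps=0.05, delta=0.01,
                    candidate_shifts=np.linspace(0.0, 10.0, 101))
print("calibrated shift:", s)
```

Note that the bound holds for the shift at which it is checked; a practical procedure would account for the multiple candidates tested, e.g. by splitting delta across the grid.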
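The abstract names the SMSE loss of contribution (iv) but does not define it; the sketch below is one plausible asymmetric reading, in which a hypothetical weighting parameter alpha penalizes under-estimation more heavily than over-estimation so that training itself biases the surrogate upward.

```python
import numpy as np

def smse_loss(pred, target, alpha=0.9):
    """Hypothetical asymmetric MSE: residuals where the surrogate
    under-estimates (pred < target) are weighted by alpha, the rest by
    1 - alpha. With alpha in (0.5, 1) the minimizer is biased upward,
    promoting safe (over-estimating) surrogates."""
    resid = pred - target
    weights = np.where(resid < 0.0, alpha, 1.0 - alpha)
    return float(np.mean(weights * resid ** 2))

# With alpha = 0.5 this reduces (up to a factor of 2) to the usual MSE.
```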