A critical evaluation of uncertainty estimation in neural networks

2021 
Quantifying the predictive uncertainty of Neural Network (NN) models remains a difficult, unsolved problem, especially since the ground truth is usually not available. In this work we evaluate many regression uncertainty estimation models and discuss their accuracy using training sets where the uncertainty is known exactly. We compare three regression models (a homoscedastic model, a heteroscedastic model, and a quantile model) and show that: while all models can learn an accurate estimate of the response, accurately estimating the uncertainty is very difficult; the quantile model performs best at estimating uncertainty; model bias is confounded with uncertainty, and the two are very difficult to disentangle when only one measurement per training point is available; and improved accuracy of the estimated uncertainty is possible, but the experimental cost of learning uncertainty is very large, since it requires multiple measurements of the response almost everywhere in the input space.
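
The three model families named in the abstract differ mainly in the loss used to train the network. The sketch below is an illustration only, not the paper's implementation: the function names, the choice of PyTorch, and the log-sigma parameterization are assumptions. It shows typical per-sample losses for each family: a homoscedastic Gaussian negative log-likelihood with a single learned noise level, a heteroscedastic Gaussian negative log-likelihood where the network predicts a per-input noise level, and the pinball loss used to train a quantile model.

```python
# Illustrative per-sample losses for the three regression model families.
# Minimal sketch with assumed names; not the paper's actual code.
import torch


def homoscedastic_nll(y_pred, y_true, log_sigma):
    """Gaussian NLL (up to a constant) with one global, input-independent
    noise level log_sigma, learned as a free parameter."""
    return 0.5 * (y_true - y_pred) ** 2 * torch.exp(-2.0 * log_sigma) + log_sigma


def heteroscedastic_nll(y_pred, y_true, log_sigma_pred):
    """Gaussian NLL where the network also outputs a per-input noise level
    log_sigma_pred, so the estimated uncertainty varies across the input space."""
    return 0.5 * (y_true - y_pred) ** 2 * torch.exp(-2.0 * log_sigma_pred) + log_sigma_pred


def pinball_loss(y_pred_q, y_true, tau):
    """Quantile (pinball) loss for the tau-th quantile of the response."""
    diff = y_true - y_pred_q
    return torch.maximum(tau * diff, (tau - 1.0) * diff)
```

Training two quantile outputs (for example tau = 0.05 and tau = 0.95) yields a 90% prediction interval, which is the sense in which a quantile model provides an uncertainty estimate alongside the response.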