Operational calibration: debugging confidence errors for DNNs in the field
2020
Trained DNN models are increasingly adopted as integral parts of software systems, but they often perform deficiently in the field. A particularly damaging problem is that DNN models often give false predictions with high confidence, due to the unavoidable slight divergences between operation data and training data. To minimize the loss caused by inaccurate confidence, operational calibration, i.e., calibrating the confidence function of a DNN classifier against its operation domain, becomes a necessary debugging step in the engineering of the whole system.
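The abstract does not spell out the paper's calibration method, but the idea of re-fitting a classifier's confidence function on operation-domain data can be illustrated with a standard baseline: temperature scaling. The sketch below is an assumption for illustration only (grid search over a single temperature, NumPy only), not the technique proposed in the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels, T):
    # Negative log-likelihood of the true labels at temperature T.
    p = softmax(logits, T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    # Grid-search the temperature that minimizes NLL on held-out
    # operation-domain data (a simple stand-in for gradient fitting).
    return min(grid, key=lambda T: nll(logits, labels, T))

# Toy operation-domain sample: logits deliberately inflated so the
# uncalibrated model is overconfident, mimicking the field scenario.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)
logits = rng.normal(size=(200, 3))
logits[np.arange(200), labels] += 2.0   # correct class gets a boost
logits *= 3.0                           # inflated magnitudes -> overconfidence
T = fit_temperature(logits, labels)      # T > 1 softens the confidence
```

Fitting a temperature greater than 1 shrinks the predicted probabilities toward uniform, so the reported confidence better matches accuracy on the operation domain; the paper's point is that such recalibration is a debugging step for the deployed system, not just a training-time concern.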