Forecast evaluation with imperfect observations and imperfect models.

2018 
The field of statistics has become one of the mathematical foundations of forecast evaluation studies, especially with regard to computing scoring rules. The classical paradigm of proper scoring rules is to discriminate between two different forecasts by comparing them with observations. The probability density function of the observed record is assumed to be a perfect verification benchmark. In practice, however, observations are almost always tainted by errors. These may be due to homogenization problems, instrumental deficiencies, the need for indirect reconstructions from other sources (e.g., radar data), model errors in gridded products like reanalysis, or other data-recording issues. If the yardstick used to compare forecasts is imprecise, one may wonder whether such errors have a strong influence on decisions based on classical scoring rules. Building on the recent work of Ferro (2017), we propose a new scoring rule scheme in the context of models that incorporate errors in the verification data, compare it to existing methods, and apply it to various setups, mainly a Gaussian additive noise model and a gamma multiplicative noise model. In addition, we frame the problem of errors in verification data as scoring a model that jointly couples the forecast and observation distributions. This is strongly connected to the so-called errors-in-variables models in statistics.
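As a minimal sketch of why imperfect observations matter (not the paper's proposed scoring rule), consider a quadratic score evaluated against noisy verification data under a Gaussian additive noise model. The naive score against the noisy observations is inflated by the error variance, and subtracting that variance recovers an unbiased estimate of the score against the unobserved truth; the variable names (sigma_eps, forecast_mean) and the known-error-variance assumption are illustrative only.

```python
# Sketch: effect of Gaussian additive observation error on a quadratic score,
# with a simple variance-subtraction correction (assumes the error variance is known).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

truth = rng.normal(loc=0.0, scale=1.0, size=n)        # latent true state X
sigma_eps = 0.5
obs = truth + rng.normal(scale=sigma_eps, size=n)      # noisy verification Y = X + eps

forecast_mean = np.zeros(n)                            # a point forecast for X

mse_vs_truth = np.mean((forecast_mean - truth) ** 2)   # ideal but unobservable score
mse_vs_obs = np.mean((forecast_mean - obs) ** 2)       # naive score against noisy obs
mse_corrected = mse_vs_obs - sigma_eps ** 2            # unbiased for mse_vs_truth

print(f"score vs. truth     : {mse_vs_truth:.3f}")
print(f"score vs. noisy obs : {mse_vs_obs:.3f}  (inflated by ~{sigma_eps**2:.2f})")
print(f"corrected score     : {mse_corrected:.3f}")
```

Error-aware scoring schemes formalize this inflation-then-correction logic for full probabilistic forecasts and for non-Gaussian error models such as the multiplicative gamma case.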