Evaluation of the prediction skill of stock assessment using hindcasting
2016
Abstract A major uncertainty in stock assessment is the difference between models and reality. The validation of model predictions is difficult, however, as fish stocks can rarely be observed and counted. We therefore show how hindcasting and model-free validation can be used to evaluate multiple measures of prediction skill. In a hindcast, a model is fitted to the first part of a time series and then projected over the period omitted from the original fit. Prediction skill can then be evaluated by comparing the predictions from the projection with the observations. We show that uncertainty increased when different datasets and hypotheses were considered, especially as time series of model-derived parameters were sensitive to model assumptions. Using hindcasting and model-free validation to evaluate prediction skill is an objective way to evaluate risk, i.e., to identify the uncertainties that matter. A hindcast is also a pragmatic alternative to hindsight, without the associated risks. The use of multiple measures helps to evaluate prediction skill and to focus research on the data and the processes that generated them.
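The hindcast procedure described above can be summarised in a short, model-agnostic sketch: successively omit the terminal years of an observed series, refit, project over the omitted period, and score the projections against the held-out observations. The `fit_and_forecast` callable and the MASE-style skill score used here are illustrative assumptions for exposition, not the paper's specific implementation.

```python
# Minimal sketch of hindcast-based prediction-skill evaluation (assumptions noted above).
import numpy as np


def naive_forecast(train, horizon):
    """Persistence forecast: repeat the last observed value (placeholder model)."""
    return np.repeat(train[-1], horizon)


def hindcast_skill(obs, n_peels=5, horizon=1, fit_and_forecast=naive_forecast):
    """Return a MASE-style skill score for each retrospective 'peel'.

    obs              : 1-D array of observations (e.g. an abundance index by year)
    n_peels          : number of terminal years successively omitted from the fit
    horizon          : projection length compared against the held-out observations
    fit_and_forecast : callable(train, horizon) -> predicted values
    """
    scores = []
    for peel in range(1, n_peels + 1):
        split = len(obs) - peel - horizon + 1
        train, test = obs[:split], obs[split:split + horizon]
        pred = fit_and_forecast(train, horizon)
        # Scale the prediction error by the in-sample naive (lag-1) error,
        # so scores below 1 indicate skill relative to a persistence forecast.
        scale = np.mean(np.abs(np.diff(train)))
        scores.append(np.mean(np.abs(pred - test)) / scale)
    return np.array(scores)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    index = 100 * np.exp(np.cumsum(rng.normal(0, 0.1, 30)))  # synthetic index
    print(hindcast_skill(index, n_peels=5, horizon=1))
```

In practice the placeholder forecast would be replaced by refitting the stock assessment model to each truncated series, and several skill measures could be computed over the peels rather than the single scaled error shown here.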