Uncertainty quantification for hydrological models based on neural networks: the dropout ensemble

2021 
The use of neural networks in hydrology has frequently been undermined by limitations in quantifying predictive uncertainty. Many authors have proposed methodologies to overcome these limitations, such as running Monte Carlo simulations, Bayesian approximations, bootstrapping training samples, and two-step approaches, among others, which come with computational limitations of their own. One less frequently explored alternative is to repurpose the dropout scheme during inference. Dropout is commonly used during training to avoid overfitting. However, it may also be activated during the testing period to effortlessly provide an ensemble of multiple “sister” predictions. This study explores the predictive uncertainty in hydrological models based on neural networks by comparing a multiparameter ensemble to a dropout ensemble. The dropout ensemble shows more reliable coverage of prediction intervals, while the multiparameter ensemble results in sharper prediction intervals. Moreover, for neural network structures with optimal lookback series, both ensemble strategies result in similar average interval scores. The dropout ensemble, however, benefits from requiring only a single calibration run, i.e., a single set of parameters. In addition, it delivers important insight for engineering design and decision-making with no increase in computational cost. Therefore, the dropout ensemble can be easily included in uncertainty analysis routines and even be combined with multiparameter or multimodel alternatives.
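The core idea, keeping dropout active at inference and treating repeated stochastic forward passes as an ensemble, can be illustrated with a minimal NumPy sketch. The network, weights, dropout rate, and percentile bounds below are all illustrative assumptions, not the study's actual model configuration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy one-hidden-layer network with fixed (pretend "calibrated") weights.
W1 = rng.normal(size=(3, 16))
b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1))
b2 = np.zeros(1)

def forward(x, drop_rate=0.2, apply_dropout=True):
    """One forward pass; dropout stays active at inference (MC dropout)."""
    h = np.maximum(x @ W1 + b1, 0.0)                 # ReLU hidden layer
    if apply_dropout:
        mask = rng.random(h.shape) > drop_rate       # random unit dropout
        h = h * mask / (1.0 - drop_rate)             # inverted-dropout scaling
    return (h @ W2 + b2).ravel()

x = rng.normal(size=(5, 3))                          # 5 samples, 3 features

# Dropout ensemble: repeat the stochastic pass to get "sister" predictions.
ensemble = np.stack([forward(x) for _ in range(200)])  # shape (200, 5)

mean_pred = ensemble.mean(axis=0)                    # ensemble mean
lo, hi = np.percentile(ensemble, [5, 95], axis=0)    # 90% prediction interval
```

Because only one set of calibrated weights is needed, the ensemble arises from the random dropout masks alone, which is what keeps the extra computational cost negligible compared with calibrating multiple parameter sets.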
References: 58 · Citations: 7