General Bayesian $L^2$ calibration of mathematical models.

2021 
A general Bayesian method for $L^2$ calibration of a mathematical model is presented. General Bayesian inference starts with the specification of a loss function. Then, the log-likelihood in Bayes' theorem is replaced by the negative loss. While the minimiser of the loss function is unchanged by, for example, multiplying the loss by a constant, the same is not true of the resulting general posterior distribution. To address this problem in the context of $L^2$ calibration of mathematical models, different automatic scalings of the general Bayesian posterior are proposed. These are based on equating asymptotic properties of the general Bayesian posterior and the minimiser of the $L^2$ loss.
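The scaling issue described above can be illustrated with a minimal numerical sketch (not the paper's method; the model, data, and flat prior here are hypothetical). For a loss $L(\theta)$, the general posterior is proportional to $\pi(\theta)\exp\{-w\,L(\theta)\}$ for a scaling $w > 0$: the mode is the $L^2$ minimiser for any $w$, but the spread of the posterior depends on $w$, which is why an automatic choice of scaling is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical field data: true process y = 1.5*x plus observation noise.
x = np.linspace(0.0, 1.0, 20)
y = 1.5 * x + 0.1 * rng.standard_normal(x.size)

def l2_loss(theta):
    # L2 calibration loss: squared discrepancy between the data and the
    # (hypothetical) mathematical model m(x, theta) = theta * x.
    return np.sum((y - theta * x) ** 2)

def general_posterior(w, grid):
    # Unnormalised general Bayesian posterior exp(-w * loss) under a flat
    # prior, normalised numerically on the grid.
    logp = np.array([-w * l2_loss(t) for t in grid])
    logp -= logp.max()                      # stabilise before exponentiating
    p = np.exp(logp)
    return p / (p.sum() * (grid[1] - grid[0]))

def posterior_sd(p, grid):
    # Standard deviation of the grid-based posterior.
    dx = grid[1] - grid[0]
    mean = np.sum(grid * p) * dx
    return np.sqrt(np.sum((grid - mean) ** 2 * p) * dx)

grid = np.linspace(0.5, 2.5, 2001)
p1 = general_posterior(1.0, grid)
p10 = general_posterior(10.0, grid)

# Same mode (the L2 minimiser) for either scaling w ...
print(grid[p1.argmax()], grid[p10.argmax()])
# ... but the posterior spread shrinks as w grows.
print(posterior_sd(p1, grid), posterior_sd(p10, grid))
```

Because $\exp\{-w\,L(\theta)\}$ is a monotone transformation of $L(\theta)$ for any $w > 0$, the two posteriors share the same mode, while their standard deviations differ, matching the point made in the abstract.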