Bayes, Reproducibility and the Quest for Truth
2016
We consider the use of default priors in the Bayes methodology
for seeking information concerning the true value of a parameter. By default
prior, we mean the mathematical prior as initiated by Bayes [Philos.
Trans. R. Soc. Lond. 53 (1763) 370–418] and pursued by Laplace [Theorie
Analytique des Probabilites (1812) Courcier], Jeffreys [Theory of Probability
(1961) Clarendon Press], Bernardo [J. Roy. Statist. Soc. Ser. B 41 (1979)
113–147] and many more, and then recently viewed as “potentially dangerous”
[Science 340 (2013) 1177–1178] and “potentially useful” [Science
341 (2013) 1452]. We do not mean, however, the genuine prior [Science 340
(2013) 1177–1178] that has an empirical reference and would invoke standard
frequency modelling. Nor do we mean the subjective or opinion prior that an
individual might hold, which would be viewed as specific to that individual.
individual. A mathematical prior has no referenced frequency information,
but on occasion is known otherwise to lead to repetition properties called
confidence. We investigate the presence of such a supportive property and ask
whether Bayes can give reliability for other than the particular parameter
weightings chosen for the conditional calculation. Thus, does the methodology
have reproducibility? Or is it a leap of faith?
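As a concrete illustration of a mathematical prior carrying such a confidence (repetition) property — our own sketch, not an example from the text — take the normal location model y ~ N(θ, 1) with a flat prior: the posterior is N(y, 1), so its 0.95-quantile y + 1.645 sits above the true θ in 95% of repetitions.

```python
import random

# Sketch (our illustration): y ~ N(theta, 1) with a flat prior gives
# posterior N(y, 1), so the posterior 0.95-quantile is y + z_95.
# Over repetitions the true theta lies below that quantile with
# frequency 0.95 -- confidence arising from a mathematical prior.
random.seed(1)
theta_true = 2.0
z_95 = 1.6449  # standard normal 0.95 quantile
trials = 100_000
below = sum(
    theta_true < random.gauss(theta_true, 1.0) + z_95
    for _ in range(trials)
)
print(below / trials)  # close to 0.95
```

Here the repetition property is exact in theory; the simulation only confirms the frequency numerically.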
For sample-space analysis, recent higher-order likelihood methods with
regular models show that third-order accuracy is widely available using profile
contours [In Past, Present and Future of Statistical Science (2014) 237–
252 CRC Press].
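The profile-contour ingredient can be sketched in code (our first-order illustration only; the cited third-order adjustments are beyond this snippet). For a normal sample with interest parameter the mean μ and nuisance parameter σ, the profile log-likelihood maximizes over σ, and the signed likelihood root gives a first-order confidence contour:

```python
import math

# First-order profile likelihood sketch (our illustration).
# Normal sample; interest = mean mu, nuisance = sigma.
y = [4.1, 5.3, 3.8, 4.9, 5.6, 4.4, 5.0, 4.7]
n = len(y)
ybar = sum(y) / n  # MLE of mu

def profile_loglik(mu):
    # maximize over sigma: sigma_hat(mu)^2 = mean((y_i - mu)^2)
    s2 = sum((yi - mu) ** 2 for yi in y) / n
    return -0.5 * n * math.log(s2)  # up to an additive constant

def signed_root(mu):
    # signed likelihood root r(mu); |r| = 1.96 traces a first-order
    # 95% confidence contour for mu
    r2 = 2.0 * (profile_loglik(ybar) - profile_loglik(mu))
    return math.copysign(math.sqrt(max(r2, 0.0)), ybar - mu)

print(signed_root(ybar))        # 0 at the maximum
print(signed_root(3.5) > 1.96)  # mu = 3.5 lies outside the contour
```

The higher-order methods referenced above adjust r(μ) to achieve third-order accuracy; the contour computation itself starts from exactly this profile.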
But for parameter-space analysis, accuracy is widely limited to first order.
An exception arises with a scalar full parameter and the use of the scalar
Jeffreys [J. Roy. Statist. Soc. Ser. B 25 (1963) 318–329]. But for vector full
parameter even with a scalar interest parameter, difficulties have long been
known [J. Roy. Statist. Soc. Ser. B 35 (1973) 189–233] and with parameter
curvature, accuracy beyond first order can be unavailable [Statist. Sci. 26 (2011) 299–316]. We show, however, that calculations on the parameter space
can give full second-order information for a chosen scalar interest parameter;
these calculations, however, require a Jeffreys prior that is used fully
restricted to the one-dimensional profile for that interest parameter. Such a
prior is effectively data-dependent and parameter-dependent and is focally
restricted to the one-dimensional contour; these priors fall outside the usual
Bayes approach, and yet even with substantial calculation can give less than
the corresponding frequency analysis.
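For the scalar-full-parameter exception noted above, a toy case (our own construction, not one of the paper's examples) makes the Jeffreys calibration concrete: for y_1,…,y_n iid Exponential(λ), the Jeffreys prior π(λ) ∝ 1/λ gives posterior Gamma(n, rate = Σy), and the oracle check of posterior quantiles against the true λ matches the nominal level.

```python
import random

# Sketch (our construction): scalar parameter lambda, exponential
# model.  With the Jeffreys prior pi(lambda) ∝ 1/lambda, the posterior
# given y_1..y_n iid Exp(lambda) is Gamma(n, rate = S), S = sum(y_i).
# Oracle check: how often does the true lambda fall left of the
# posterior 0.95-quantile across repetitions?
random.seed(2)
lam_true, n = 1.5, 10
reps, hits = 2000, 0
for _ in range(reps):
    S = sum(random.expovariate(lam_true) for _ in range(n))
    # posterior 0.95-quantile by Monte Carlo; gammavariate takes
    # (shape, scale), so scale = 1 / S
    draws = sorted(random.gammavariate(n, 1.0 / S) for _ in range(400))
    q95 = draws[int(0.95 * 400) - 1]
    hits += lam_true < q95
print(hits / reps)  # near 0.95: the repetition property holds here
```

The match is exact in theory because λ·S is pivotal (Gamma(n, 1)); the simulation shows only Monte Carlo error around 0.95.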
We provide simple examples using discrete extensions of Jeffreys prior.
These serve as counter-examples to general claims that Bayes can offer accuracy
for statistical inference. To obtain this accuracy with Bayes, more
effort is required compared to recent likelihood methods, which still remain
more accurate. And with vector full parameters, accuracy beyond first order
is routinely unavailable: a change in parameter curvature moves the Bayes
and frequentist values in opposite directions, yet the frequentist value retains
full reproducibility.
An alternative is to view default Bayes as an exploratory technique and
then ask whether it does as it overtly claims: is it reproducible as understood in
contemporary science? The posterior gives a distribution for an interest parameter
and, thereby, a quantile for the interest parameter; an oracle could
record whether that quantile fell left or right of the true value. If the average split in
evaluative repetitions accords with the nominal level, then the approach
is providing accuracy. And if not, then what has been gained, other than performance
specific to the parameter frequencies in the prior? No one has answers, although
speculative claims abound.
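The oracle check just described can fail badly for a default prior; a toy case of our own (not one of the paper's examples) shows the mechanism. With a flat prior on a binomial success probability and a true value near the boundary, the posterior is Beta(s + 1, n − s + 1), and the left/right split around the posterior 0.95-quantile is nowhere near the nominal 95/5:

```python
import random

# Toy oracle check (our illustration): binomial n = 5 with a flat
# prior on p, so the posterior is Beta(s + 1, n - s + 1).  Record how
# often the true p lies left of the posterior 0.95-quantile; with
# p_true = 0.1 the split is essentially 100/0, far from the nominal
# 95/5 -- the default prior is not calibrated for this parameter value.
random.seed(3)
n, p_true = 5, 0.1
reps, left = 2000, 0
for _ in range(reps):
    s = sum(random.random() < p_true for _ in range(n))
    # posterior 0.95-quantile by Monte Carlo from Beta(s + 1, n - s + 1)
    draws = sorted(random.betavariate(s + 1, n - s + 1) for _ in range(500))
    q95 = draws[int(0.95 * 500) - 1]
    left += p_true < q95
print(left / reps)  # far above the nominal 0.95
```

The split here reflects the uniform weighting of parameter values in the prior, not a repetition property of the procedure — exactly the distinction the oracle check is meant to expose.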