Empirical Bayes method

Empirical Bayes methods are procedures for statistical inference in which the prior distribution is estimated from the data. This approach stands in contrast to standard Bayesian methods, for which the prior distribution is fixed before any data are observed. Despite this difference in perspective, empirical Bayes may be viewed as an approximation to a fully Bayesian treatment of a hierarchical model wherein the parameters at the highest level of the hierarchy are set to their most likely values, instead of being integrated out. Empirical Bayes, also known as maximum marginal likelihood, represents one approach for setting hyperparameters.

Empirical Bayes methods can be seen as an approximation to a fully Bayesian treatment of a hierarchical Bayes model. In a two-stage hierarchical Bayes model, for example, observed data $y = \{y_1, y_2, \dots, y_n\}$ are assumed to be generated from an unobserved set of parameters $\theta = \{\theta_1, \theta_2, \dots, \theta_n\}$ according to a probability distribution $p(y \mid \theta)$. In turn, the parameters $\theta$ can be considered samples drawn from a population characterised by hyperparameters $\eta$ according to a probability distribution $p(\theta \mid \eta)$. In the hierarchical Bayes model, though not in the empirical Bayes approximation, the hyperparameters $\eta$ are considered to be drawn from an unparameterized distribution $p(\eta)$.

Information about a particular quantity of interest $\theta_i$ therefore comes not only from the properties of those data which directly depend on it, but also from the properties of the population of parameters $\theta$ as a whole, inferred from the data as a whole and summarised by the hyperparameters $\eta$.

Using Bayes' theorem,

$$p(\theta \mid y) = \frac{p(y \mid \theta)\, p(\theta)}{p(y)} = \frac{p(y \mid \theta)}{p(y)} \int p(\theta \mid \eta)\, p(\eta)\, d\eta .$$

In general, this integral will not be tractable analytically or symbolically and must be evaluated by numerical methods. Stochastic (random) or deterministic approximations may be used. Example stochastic methods are Markov chain Monte Carlo and Monte Carlo sampling; example deterministic approximations are quadrature rules.
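To make the maximum-marginal-likelihood step concrete, here is a minimal Python sketch, not taken from the article: it assumes a two-stage Normal–Normal model, $\theta_i \sim N(\mu, \tau^2)$ and $y_i \mid \theta_i \sim N(\theta_i, \sigma^2)$ with $\sigma$ known, and all data and variable names below are synthetic and illustrative. The hyperparameters $(\mu, \tau)$ are estimated by maximizing the marginal likelihood of $y$, then plugged back in as if known, so that each posterior mean shrinks $y_i$ toward the estimated population mean.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic data (illustrative only): one noisy observation y_i per unit,
# with known observation noise sd sigma.
rng = np.random.default_rng(0)
sigma = 1.0                                     # known noise sd
true_mu, true_tau = 2.0, 1.5                    # hyperparameters we pretend not to know
theta = rng.normal(true_mu, true_tau, size=50)  # latent parameters theta_i
y = rng.normal(theta, sigma)                    # observed data y_i

# Marginally (theta integrated out), y_i ~ N(mu, tau^2 + sigma^2),
# so the marginal likelihood of (mu, tau) is available in closed form.
def neg_log_marginal(params):
    mu, log_tau = params                        # optimize log(tau) so tau > 0
    s = np.sqrt(np.exp(2 * log_tau) + sigma**2)
    return -norm.logpdf(y, loc=mu, scale=s).sum()

res = minimize(neg_log_marginal, x0=[0.0, 0.0])
mu_hat, tau_hat = res.x[0], np.exp(res.x[1])

# Empirical Bayes: plug the estimated hyperparameters back in as if known.
# The posterior mean of each theta_i shrinks y_i toward mu_hat.
B = sigma**2 / (sigma**2 + tau_hat**2)          # shrinkage factor
theta_hat = B * mu_hat + (1 - B) * y

print(f"mu_hat = {mu_hat:.2f}, tau_hat = {tau_hat:.2f}, shrinkage B = {B:.2f}")
```

In this conjugate case the integral over $\theta$ has a closed form, so no stochastic or quadrature approximation is needed; for non-conjugate models the marginal likelihood itself would have to be approximated by the numerical methods the article mentions.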

[ "Bayes' theorem" ]
Parent Topic
Child Topic
    No Parent Topic