
Bayes estimator

In estimation theory and decision theory, a Bayes estimator or a Bayes action is an estimator or decision rule that minimizes the posterior expected value of a loss function (i.e., the posterior expected loss). Equivalently, it maximizes the posterior expectation of a utility function. An alternative way of formulating an estimator within Bayesian statistics is maximum a posteriori estimation.

Suppose an unknown parameter $\theta$ is known to have a prior distribution $\pi$. Let $\widehat{\theta} = \widehat{\theta}(x)$ be an estimator of $\theta$ (based on some measurements $x$), and let $L(\theta, \widehat{\theta})$ be a loss function, such as squared error. The Bayes risk of $\widehat{\theta}$ is defined as $E_{\pi}(L(\theta, \widehat{\theta}))$, where the expectation is taken over the probability distribution of $\theta$: this defines the risk as a function of $\widehat{\theta}$. An estimator $\widehat{\theta}$ is said to be a Bayes estimator if it minimizes the Bayes risk among all estimators. Equivalently, the estimator that minimizes the posterior expected loss $E(L(\theta, \widehat{\theta}) \mid x)$ for each $x$ also minimizes the Bayes risk and is therefore a Bayes estimator. If the prior is improper, an estimator that minimizes the posterior expected loss for each $x$ is called a generalized Bayes estimator.

The most common risk function used for Bayesian estimation is the mean square error (MSE), also called squared error risk. The MSE is defined by

$$\mathrm{MSE} = E\left[ (\widehat{\theta}(x) - \theta)^{2} \right],$$

where the expectation is taken over the joint distribution of $\theta$ and $x$. Using the MSE as risk, the Bayes estimate of the unknown parameter is simply the mean of the posterior distribution,

$$\widehat{\theta}(x) = E[\theta \mid x] = \int \theta \, p(\theta \mid x) \, d\theta.$$

This is known as the minimum mean square error (MMSE) estimator.

If there is no inherent reason to prefer one prior probability distribution over another, a conjugate prior is sometimes chosen for simplicity. A conjugate prior is defined as a prior distribution belonging to some parametric family for which the resulting posterior distribution also belongs to the same family. This is an important property, since the Bayes estimator, as well as its statistical properties (variance, credible interval, etc.), can all be derived from the posterior distribution.
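For a concrete illustration, here is a minimal Python sketch of both routes to the Bayes estimate under squared error loss, using a Beta-Binomial model in which the Beta prior is conjugate to the binomial likelihood. The hyperparameters and data values are hypothetical, chosen only for the example.

```python
import numpy as np
from scipy.stats import beta

# Conjugate Beta-Binomial model (all numbers here are hypothetical).
a, b = 2.0, 2.0    # prior: theta ~ Beta(a, b)
n, k = 20, 14      # data: k successes in n binomial trials

# Conjugacy: the posterior is again a Beta distribution,
# Beta(a + k, b + n - k), so it stays in the prior's family.
posterior = beta(a + k, b + n - k)

# Under squared error loss, the Bayes (MMSE) estimator is the
# posterior mean, available here in closed form: (a + k) / (a + b + n).
theta_mmse = posterior.mean()

# Generic route: minimize the posterior expected loss numerically.
# Discretize theta, weight the squared error loss by the posterior
# density, and pick the action with the smallest expected loss.
grid = np.linspace(1e-4, 1 - 1e-4, 2001)
weights = posterior.pdf(grid)
weights /= weights.sum()

def posterior_expected_loss(action):
    return np.sum(weights * (grid - action) ** 2)

theta_numeric = min(grid, key=posterior_expected_loss)

print(f"closed-form MMSE estimate: {theta_mmse:.4f}")
print(f"numerical argmin:          {theta_numeric:.4f}")  # agrees closely
```

The numerical route is the one that generalizes: for losses other than squared error (for example, absolute error loss, whose Bayes estimator is the posterior median), the same argmin over the posterior expected loss still defines the Bayes action.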

[ "Bayesian probability", "Estimator", "spatial spectrum pattern", "bayesian estimator", "posterior risk", "Bayes error rate" ]
Parent Topic
Child Topic
    No Parent Topic