
Consistent estimator

In statistics, a consistent estimator or asymptotically consistent estimator is an estimator (a rule for computing estimates of a parameter θ0) having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to θ0. This means that the distributions of the estimates become more and more concentrated near the true value of the parameter being estimated, so that the probability of the estimator being arbitrarily close to θ0 converges to one.

In practice one constructs an estimator as a function of an available sample of size n, and then imagines being able to keep collecting data and expanding the sample ad infinitum. In this way one obtains a sequence of estimates indexed by n, and consistency is a property of what occurs as the sample size "grows to infinity". If the sequence of estimates can be mathematically shown to converge in probability to the true value θ0, it is called a consistent estimator; otherwise the estimator is said to be inconsistent.

Consistency as defined here is sometimes referred to as weak consistency. When convergence in probability is replaced with almost sure convergence, the estimator is said to be strongly consistent. Consistency is related to bias; see bias versus consistency.

Loosely speaking, an estimator Tn of parameter θ is said to be consistent if it converges in probability to the true value of the parameter:

    plim(n→∞) Tn = θ, i.e. Pr(|Tn − θ| > ε) → 0 as n → ∞ for every ε > 0.

A more rigorous definition takes into account the fact that θ is actually unknown, and thus the convergence in probability must take place for every possible value of this parameter. Suppose {pθ : θ ∈ Θ} is a family of distributions (the parametric model), and Xθ = {X1, X2, … : Xi ~ pθ} is an infinite sample from the distribution pθ. Let {Tn(Xθ)} be a sequence of estimators for some parameter g(θ). Usually Tn will be based on the first n observations of the sample. Then this sequence {Tn} is said to be (weakly) consistent if

    plim(n→∞) Tn(Xθ) = g(θ) for every θ ∈ Θ.

This definition uses g(θ) instead of simply θ because often one is interested in estimating a certain function or a sub-vector of the underlying parameter. In the next example we estimate the location parameter of the model, but not the scale.

Suppose one has a sequence of observations {X1, X2, ...} from a normal N(μ, σ2) distribution. To estimate μ based on the first n observations, one can use the sample mean: Tn = (X1 + ... + Xn)/n. This defines a sequence of estimators, indexed by the sample size n. From the properties of the normal distribution, we know the sampling distribution of this statistic: Tn is itself normally distributed, with mean μ and variance σ2/n. Equivalently, (Tn − μ)/(σ/√n) has a standard normal distribution, so for any fixed ε > 0,

    Pr(|Tn − μ| ≥ ε) = Pr(√n |Tn − μ|/σ ≥ √n ε/σ) = 2(1 − Φ(√n ε/σ)) → 0

as n tends to infinity, where Φ is the cumulative distribution function of the standard normal distribution. Therefore, the sequence Tn of sample means is consistent for the population mean μ.
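The convergence above is easy to check numerically. Below is a minimal simulation sketch in Python (NumPy and SciPy assumed available; the parameter values μ = 5, σ = 2, ε = 0.5 are arbitrary illustrative choices, not from the original text). It estimates Pr(|Tn − μ| ≥ ε) by Monte Carlo for several sample sizes and compares the result with the closed form 2(1 − Φ(√n ε/σ)) derived above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma, eps = 5.0, 2.0, 0.5   # arbitrary illustrative values
n_trials = 10_000                # Monte Carlo replications per sample size

for n in [10, 50, 250, 1000]:
    # Draw n_trials independent samples of size n; each row yields one T_n
    samples = rng.normal(mu, sigma, size=(n_trials, n))
    t_n = samples.mean(axis=1)

    # Monte Carlo estimate of Pr(|T_n - mu| >= eps)
    p_emp = np.mean(np.abs(t_n - mu) >= eps)

    # Closed form from the normality of T_n: 2 * (1 - Phi(sqrt(n) * eps / sigma))
    p_exact = 2 * (1 - norm.cdf(np.sqrt(n) * eps / sigma))

    print(f"n={n:5d}  empirical={p_emp:.4f}  exact={p_exact:.4f}")
```

Both columns shrink toward zero as n grows, which is precisely the convergence-in-probability statement defining weak consistency.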
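The remark that consistency is related to, but distinct from, bias can be illustrated the same way. As a sketch under the same assumptions (NumPy; arbitrary illustrative parameters): the variance estimator that divides by n rather than n − 1 (the maximum-likelihood variance estimator in a normal model) is biased for every finite n, with bias −σ2/n, yet it is consistent, because both its bias and its variance vanish as n → ∞.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 5.0, 2.0             # arbitrary illustrative values
n_trials = 10_000

for n in [10, 100, 1000]:
    samples = rng.normal(mu, sigma, size=(n_trials, n))
    # ddof=0 divides by n, so E[s2] = sigma^2 * (n - 1)/n: biased but consistent
    s2 = samples.var(axis=1, ddof=0)

    bias = s2.mean() - sigma**2                       # shrinks like -sigma^2 / n
    rmse = np.sqrt(np.mean((s2 - sigma**2) ** 2))     # -> 0, implying consistency

    print(f"n={n:5d}  bias={bias:+.4f}  rmse={rmse:.4f}")
```

A vanishing root-mean-square error implies convergence in probability, so a biased estimator can still be consistent; conversely, an unbiased estimator whose variance does not shrink with n is not consistent.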

[ "Mean squared error", "Minimum-variance unbiased estimator", "Estimator", "Rao–Blackwell theorem", "Newey–West estimator", "Invariant estimator", "mean squared error matrix", "Consistent Estimation" ]