
Local asymptotic normality

In statistics, local asymptotic normality is a property of a sequence of statistical models which allows this sequence to be asymptotically approximated by a normal location model, after an appropriate rescaling of the parameter. An important example in which local asymptotic normality holds is the case of iid sampling from a regular parametric model. The notion of local asymptotic normality was introduced by Le Cam (1960).

A sequence of parametric statistical models { P_{n,θ} : θ ∈ Θ } is said to be locally asymptotically normal (LAN) at θ if there exist matrices r_n and I_θ and a random vector Δ_{n,θ} ~ N(0, I_θ) such that, for every converging sequence h_n → h,

\[ \ln \frac{dP_{n,\theta + r_n^{-1} h_n}}{dP_{n,\theta}} = h^{\mathsf T}\Delta_{n,\theta} - \tfrac{1}{2}\, h^{\mathsf T} I_\theta\, h + o_{P_{n,\theta}}(1), \]

where the derivative here is a Radon–Nikodym derivative, a formalised version of the likelihood ratio, and where o_{P}(1) denotes a remainder converging to zero in probability. In other words, the local likelihood ratio must converge in distribution to a normal random variable whose mean is equal to minus one half its variance:

\[ \ln \frac{dP_{n,\theta + r_n^{-1} h_n}}{dP_{n,\theta}} \ \xrightarrow{d}\ \mathcal{N}\bigl(-\tfrac{1}{2}\, h^{\mathsf T} I_\theta h,\ h^{\mathsf T} I_\theta h\bigr). \]

In particular, the sequences of distributions P_{n,\theta + r_n^{-1} h_n} and P_{n,\theta} are contiguous.

The most straightforward example of a LAN model is an iid model whose likelihood is twice continuously differentiable. Suppose { X1, X2, …, Xn } is an iid sample, where each Xi has density function f(x, θ).
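The limiting normal distribution in the definition above can be checked by simulation. The sketch below uses the N(θ, 1) location model as an assumed illustrative example (its Fisher information is I_θ = 1 and r_n = √n), draws many samples under P_{n,θ}, and compares the empirical mean and variance of the local log-likelihood ratio with −½h²I_θ and h²I_θ.

```python
import numpy as np

# Hypothetical illustration: N(theta, 1) location model, Fisher information I = 1.
rng = np.random.default_rng(0)
theta, h, n, reps = 1.0, 2.0, 400, 20000

# reps samples of size n drawn under P_{n, theta}
x = rng.normal(theta, 1.0, size=(reps, n))
t = theta + h / np.sqrt(n)  # local alternative theta + h / sqrt(n)

# log dP_{n, t} / dP_{n, theta}, evaluated on data drawn under P_{n, theta}
llr = np.sum(-(x - t) ** 2 / 2 + (x - theta) ** 2 / 2, axis=1)

# empirical moments should be near -h^2 I / 2 = -2 and h^2 I = 4
print(llr.mean(), llr.var())
```

For this particular model the expansion is exact: the log-likelihood ratio equals h·(n^{-1/2}Σ(Xi − θ)) − h²/2 for every n, so the Gaussian limit is visible even at moderate sample sizes.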
The likelihood function of the model is

\[ p_{n,\theta}(x_1,\ldots,x_n) = \prod_{i=1}^n f(x_i,\theta). \]

If f is twice continuously differentiable in θ, then a second-order Taylor expansion gives

\[ \ln p_{n,\theta+\delta\theta} - \ln p_{n,\theta} \approx \delta\theta^{\mathsf T} \sum_{i=1}^n \frac{\partial}{\partial\theta} \ln f(x_i,\theta) + \tfrac{1}{2}\, \delta\theta^{\mathsf T} \Bigl( \sum_{i=1}^n \frac{\partial^2}{\partial\theta\,\partial\theta^{\mathsf T}} \ln f(x_i,\theta) \Bigr) \delta\theta. \]

Plugging in \( \delta\theta = h/\sqrt{n} \) gives

\[ \ln \frac{p_{n,\theta+h/\sqrt{n}}}{p_{n,\theta}} \approx h^{\mathsf T} \Bigl( \frac{1}{\sqrt{n}} \sum_{i=1}^n \frac{\partial}{\partial\theta} \ln f(x_i,\theta) \Bigr) - \tfrac{1}{2}\, h^{\mathsf T} \Bigl( -\frac{1}{n} \sum_{i=1}^n \frac{\partial^2}{\partial\theta\,\partial\theta^{\mathsf T}} \ln f(x_i,\theta) \Bigr) h. \]

By the central limit theorem, the first term (in parentheses) converges in distribution to a normal random variable Δθ ~ N(0, Iθ), whereas by the law of large numbers the expression in the second parentheses converges in probability to Iθ, the Fisher information matrix:

\[ I_\theta = -\mathrm{E}\Bigl[ \frac{\partial^2}{\partial\theta\,\partial\theta^{\mathsf T}} \ln f(X_i,\theta) \Bigr] = \mathrm{E}\Bigl[ \Bigl(\frac{\partial}{\partial\theta}\ln f(X_i,\theta)\Bigr) \Bigl(\frac{\partial}{\partial\theta}\ln f(X_i,\theta)\Bigr)^{\mathsf T} \Bigr]. \]

Thus the model satisfies the LAN property at θ, with r_n = √n.
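The two limits invoked in this derivation, the CLT for the scaled score and the law of large numbers for the averaged negative Hessian, can be checked numerically. The sketch below uses a Poisson(θ) model as an assumed example; its log-density has score x/θ − 1 and second derivative −x/θ², so I_θ = 1/θ.

```python
import numpy as np

# Hypothetical illustration with the Poisson(theta) model:
#   ln f(x, theta) = x ln(theta) - theta - ln(x!)
#   score:    (d/dtheta)   ln f = x/theta - 1
#   Hessian:  (d2/dtheta2) ln f = -x/theta**2
# hence Fisher information I_theta = 1/theta.
rng = np.random.default_rng(1)
theta, n = 2.0, 200_000
x = rng.poisson(theta, n)

# CLT term: n^{-1/2} * sum of scores, approximately N(0, I_theta)
delta_n = np.sum(x / theta - 1.0) / np.sqrt(n)

# LLN term: averaged negative Hessian, converging to I_theta = 1/theta = 0.5
info_hat = np.mean(x / theta**2)
print(delta_n, info_hat)
```

With n this large, `info_hat` sits very close to 0.5, while `delta_n` remains a single random draw from (approximately) N(0, 0.5), as the derivation predicts.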
