
Wald test

In statistics, the Wald test (named after Abraham Wald) assesses constraints on statistical parameters based on the weighted distance between the unrestricted estimate and its hypothesized value under the null hypothesis, where the weight is the precision of the estimate. Intuitively, the larger this weighted distance, the less likely it is that the constraint is true. While the finite-sample distribution of the Wald statistic is generally unknown, it has an asymptotic $\chi^2$-distribution under the null hypothesis, a fact that can be used to determine statistical significance.

Together with the Lagrange multiplier test and the likelihood-ratio test, the Wald test is one of three classical approaches to hypothesis testing. An advantage of the Wald test over the other two is that it only requires estimation of the unrestricted model, which lowers the computational burden compared to the likelihood-ratio test. However, a major disadvantage is that (in finite samples) it is not invariant to changes in the representation of the null hypothesis; in other words, algebraically equivalent expressions of a non-linear parameter restriction can lead to different values of the test statistic. This is because the Wald statistic is derived from a Taylor expansion, and different ways of writing equivalent nonlinear expressions lead to nontrivial differences in the corresponding Taylor coefficients. Another aberration, known as the Hauck–Donner effect, can occur in binomial models when the estimated (unconstrained) parameter is close to the boundary of the parameter space (for instance, a fitted probability extremely close to zero or one), which results in the Wald statistic no longer increasing monotonically in the distance between the unconstrained and constrained parameter.

Under the Wald test, the estimate $\hat{\theta}$ found as the maximizing argument of the unconstrained likelihood function is compared with a hypothesized value $\theta_0$. In particular, the squared difference $(\hat{\theta} - \theta_0)^2$ is weighted by the curvature of the log-likelihood function: for a single parameter this gives the statistic $W = (\hat{\theta} - \theta_0)^2\, I(\hat{\theta})$, where $I(\hat{\theta})$ is the Fisher information evaluated at the estimate, and $W$ is asymptotically $\chi^2_1$ under the null.

For a test of a linear restriction $H_0\colon R\theta = r$ on multiple parameters, suppose $\sqrt{n}(\hat{\theta}_n - \theta) \xrightarrow{\mathcal{D}} N(0, V)$. Then, by Slutsky's theorem and the properties of the normal distribution, multiplying by $R$ has distribution $\sqrt{n}(R\hat{\theta}_n - R\theta) \xrightarrow{\mathcal{D}} N(0, RVR^{\mathsf{T}})$, so that under the null hypothesis $\sqrt{n}(R\hat{\theta}_n - r) \xrightarrow{\mathcal{D}} N(0, RVR^{\mathsf{T}})$.
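Forming the corresponding quadratic form yields the Wald statistic $W = (R\hat{\theta}_n - r)^{\mathsf{T}} \bigl(R\,\widehat{\operatorname{cov}}(\hat{\theta}_n)\,R^{\mathsf{T}}\bigr)^{-1} (R\hat{\theta}_n - r)$, which is asymptotically $\chi^2_Q$ under the null, where $Q$ is the number of restrictions (rows of $R$). The following is a minimal sketch of that computation in Python; the helper `wald_test` and all numerical inputs are hypothetical illustrations, not taken from any particular library, and the covariance passed in is assumed to be that of $\hat{\theta}_n$ itself (i.e. $V/n$, with the factor of $n$ already absorbed).

```python
import numpy as np
from scipy.stats import chi2

def wald_test(theta_hat, cov_hat, R, r):
    """Wald test of the linear restriction H0: R @ theta = r.

    theta_hat : (k,) unrestricted estimate
    cov_hat   : (k, k) estimated covariance of theta_hat (V / n)
    R, r      : (q, k) restriction matrix and (q,) target vector
    """
    diff = R @ theta_hat - r                        # distance from the null value
    precision = np.linalg.inv(R @ cov_hat @ R.T)    # weight: precision of R @ theta_hat
    W = float(diff @ precision @ diff)              # weighted squared distance
    q = R.shape[0]                                  # number of restrictions
    return W, chi2.sf(W, df=q)                      # statistic and asymptotic p-value

# Hypothetical example: test H0: theta_1 = theta_2 in a 3-parameter model.
theta_hat = np.array([1.2, 0.9, -0.4])
cov_hat = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.05, 0.00],
                    [0.00, 0.00, 0.02]])
R = np.array([[1.0, -1.0, 0.0]])
r = np.array([0.0])
W, p = wald_test(theta_hat, cov_hat, R, r)
print(f"W = {W:.3f}, p-value = {p:.3f}")   # W is chi-squared with 1 df under H0
```

A scalar null $H_0\colon \theta_j = \theta_0$ is recovered as the special case where $R$ selects one coordinate, in which case $W$ reduces to the familiar squared $t$-type ratio $(\hat{\theta}_j - \theta_0)^2 / \widehat{\operatorname{var}}(\hat{\theta}_j)$.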

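Returning to the Hauck–Donner effect mentioned above, a small numerical sketch makes the non-monotonicity concrete. It assumes an intercept-only logistic model with $\beta = \operatorname{logit}(p)$ and a sample size of $n = 100$ chosen purely for illustration; neither comes from the original article. As the fitted probability approaches the boundary, the standard error diverges faster than the estimate grows, so the Wald statistic first rises and then collapses.

```python
import numpy as np

# Illustrative sketch of the Hauck-Donner effect (assumed setup):
# intercept-only logistic model with beta = logit(p).  The MLE is
# beta_hat = logit(p_hat) with asymptotic variance 1 / (n * p_hat * (1 - p_hat)),
# so the Wald statistic for H0: beta = 0 is
#     z = logit(p_hat) * sqrt(n * p_hat * (1 - p_hat)).
n = 100
for p_hat in [0.6, 0.8, 0.95, 0.99, 0.999, 0.9999]:
    beta_hat = np.log(p_hat / (1 - p_hat))       # estimate moves away from 0
    se = 1.0 / np.sqrt(n * p_hat * (1 - p_hat))  # standard error blows up near 1
    print(f"p_hat = {p_hat:<6}  Wald z = {beta_hat / se:5.2f}")
```

In this run the $z$-statistic peaks near $\hat{p} = 0.95$ and then shrinks toward zero even though the estimate keeps moving away from the null, which is exactly the pathology Hauck and Donner described.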