
Score test

In statistics, the score test assesses constraints on statistical parameters based on the gradient of the likelihood function, known as the score, evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite-sample distributions of score tests are generally unknown, the test statistic has an asymptotic χ²-distribution under the null hypothesis, as first proved by C. R. Rao in 1948, a fact that can be used to determine statistical significance.

Since function maximization subject to equality constraints is most conveniently done using a Lagrangian expression of the problem, the score test can be equivalently understood as a test of the magnitude of the Lagrange multipliers associated with the constraints: again, if the constraints are non-binding at the maximum likelihood, the vector of Lagrange multipliers should not differ from zero by more than sampling error. The equivalence of these two approaches was first shown by S. D. Silvey in 1959, which led to the name Lagrange multiplier (LM) test that has become more commonly used, particularly in econometrics, since Breusch and Pagan's much-cited 1980 paper.

The main advantage of the score test over the Wald test and the likelihood-ratio test is that it only requires computation of the restricted estimator. This makes testing feasible when the unconstrained maximum likelihood estimate is a boundary point in the parameter space. Further, because the score test only requires estimation of the likelihood function under the null hypothesis, it is less specific than the other two tests about the precise nature of the alternative hypothesis.

Let $L(\theta \mid x)$ be the likelihood function, which depends on a univariate parameter $\theta$, and let $x$ be the data. The score $U(\theta)$ is defined as

$$U(\theta) = \frac{\partial \log L(\theta \mid x)}{\partial \theta},$$

and the Fisher information is

$$I(\theta) = -\operatorname{E}\left[\frac{\partial^{2}}{\partial \theta^{2}} \log L(\theta \mid X)\right].$$

The statistic to test $\mathcal{H}_0 : \theta = \theta_0$ is

$$S(\theta_0) = \frac{U(\theta_0)^{2}}{I(\theta_0)},$$

which has an asymptotic $\chi^{2}_{1}$ distribution when $\mathcal{H}_0$ is true. While asymptotically identical, calculating the LM statistic using the outer-gradient-product estimator of the Fisher information matrix can lead to bias in small samples. Note that some texts use an alternative notation, in which the statistic $S^{*}(\theta) = \sqrt{S(\theta)}$ is tested against a normal distribution. This approach is equivalent and gives identical results.
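As a concrete illustration (added here, not part of the original article), the following minimal Python sketch computes the single-parameter score statistic for the rate of an i.i.d. Poisson sample, testing $\mathcal{H}_0 : \lambda = \lambda_0$. The sample size, the null value $\lambda_0 = 1.0$, and the simulated data are invented for the example.

```python
import numpy as np
from scipy.stats import chi2, poisson

rng = np.random.default_rng(0)

# Hypothetical setup: test H0: lambda = 1.0 for an i.i.d. Poisson(lambda) sample.
lam0 = 1.0
x = poisson.rvs(mu=1.3, size=200, random_state=rng)  # simulated data (true rate 1.3)
n = x.size

# Score: U(lambda) = d log L / d lambda = sum(x)/lambda - n, evaluated at lambda0.
U = x.sum() / lam0 - n

# Fisher information of the whole sample: I(lambda) = n / lambda.
I = n / lam0

# Score (Lagrange multiplier) statistic; asymptotically chi-squared with 1 df under H0.
S = U**2 / I
p_value = chi2.sf(S, df=1)

print(f"S = {S:.3f}, p-value = {p_value:.4g}")
```

Note that only the null value $\lambda_0$ enters the statistic; no unrestricted maximum likelihood estimate is needed, which is the practical advantage of the score test described above.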
The score test is also the most powerful test for small deviations from the null hypothesis: $\mathcal{H}_0$ is rejected when

$$\left.\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right|_{\theta=\theta_0} \ge C,$$

where $L$ is the likelihood function, $\theta_0$ is the value of the parameter of interest under the null hypothesis, and $C$ is a constant chosen according to the desired size of the test (i.e. the probability of rejecting $\mathcal{H}_0$ if $\mathcal{H}_0$ is true; see Type I error).
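As a brief worked illustration (added here, not part of the original article), suppose $X_1, \dots, X_n$ are i.i.d. $N(\theta, \sigma^{2})$ with $\sigma^{2}$ known. The rejection rule above then becomes

$$\left.\frac{\partial \log L(\theta \mid x)}{\partial \theta}\right|_{\theta=\theta_0} = \frac{1}{\sigma^{2}}\sum_{i=1}^{n}(x_i - \theta_0) = \frac{n(\bar{x} - \theta_0)}{\sigma^{2}} \ge C,$$

i.e. $\bar{x} \ge \theta_0 + \sigma^{2}C/n$, so choosing $C$ to give size $\alpha$ recovers the familiar one-sided $z$-test threshold $\bar{x} \ge \theta_0 + z_{1-\alpha}\,\sigma/\sqrt{n}$.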

[ "Likelihood-ratio test", "Maximum likelihood", "Statistical hypothesis testing", "test", "Information matrix test", "White test", "Brown–Forsythe test", "Multinomial test", "Likelihood principle" ]