Errors-in-variables models

In statistics, errors-in-variables models or measurement error models are regression models that account for measurement errors in the independent variables. In contrast, standard regression models assume that those regressors have been measured exactly, or observed without error; as such, those models account only for errors in the dependent variables, or responses.

When regressors are measured with error, consistent estimation requires specialized methods, several of which the article develops. One approach is based on higher-order joint cumulants of (x, y): the slope is estimated by the ratio of cumulants K(n1+1, n2)/K(n1, n2+1), where (n1, n2) are such that K(n1+1, n2), the joint cumulant of (x, y), is not zero; when the third central moment of the latent regressor x* is non-zero, the formula reduces to a ratio of sample third moments of the de-meaned data. A related instrumental-variables method constructs instruments from Hadamard products (denoted ∘) of the preliminarily de-meaned variables xt and yt; the authors of that method suggest using Fuller's modified IV estimator. In another formulation the mismeasured regressor is projected onto instruments zt as xt = π0′zt + σ0ζt, where π0 and σ0 are (unknown) constant matrices and ζt ⊥ zt; π0 can be estimated by standard least squares regression of x on z, while the unknown distribution of ζt can be modeled as belonging to a flexible parametric family such as the Edgeworth series. For non-linear models one can write E[yt | xt] = ∫ g(x*) ƒx*|x(x* | xt) dx*, an integral that could be computed if the conditional density ƒx*|x were known; if that density can be estimated, the problem turns into a standard non-linear regression, which can be estimated for example using the NLLS method. Assuming for simplicity that the measurement errors η1 and η2 are identically distributed, this conditional density can be computed by deconvolution. The models may also include additional regressors wt measured without error, and although the regressor x* is taken to be scalar in these derivations, the methods extend to vector x* as well; if not for the measurement errors, each of these setups would reduce to a standard linear model with its usual least squares estimator.
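The cumulant-based estimator mentioned above can be illustrated with a short simulation. This is a hedged sketch (the parameter values, distributions, and sample size are all assumptions chosen for illustration), using the reduced third-moment form that is valid when x* has non-zero third central moment:

```python
import numpy as np

# Illustrative sketch (assumed parameters): y = beta * x_star + eps, with the
# latent regressor x_star observed only as x = x_star + eta.  When x_star has a
# non-zero third central moment and the errors are symmetric, the ratio
#   sum (x - x_bar) * (y - y_bar)^2  /  sum (x - x_bar)^2 * (y - y_bar)
# consistently estimates beta, while the naive OLS slope is attenuated.
rng = np.random.default_rng(1)
n = 500_000
beta = 2.0
x_star = rng.exponential(1.0, n)        # skewed latent regressor (3rd central moment = 2)
x = x_star + rng.normal(0.0, 1.0, n)    # mismeasured regressor
y = beta * x_star + rng.normal(0.0, 1.0, n)

dx, dy = x - x.mean(), y - y.mean()
beta_third = (dx * dy**2).sum() / (dx**2 * dy).sum()   # third-moment estimator
beta_ols = (dx * dy).sum() / (dx**2).sum()             # naive OLS slope
print(beta_third, beta_ols)  # beta_third near 2.0; beta_ols attenuated toward 1.0
```

Here var(x*) = var(η) = 1, so the OLS slope converges to β/2, while the third-moment ratio stays consistent because the symmetric measurement error contributes nothing to the third cross-moments.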
In the case when some regressors have been measured with errors, estimation based on the standard assumption leads to inconsistent estimates, meaning that the parameter estimates do not tend to the true values even in very large samples. For simple linear regression the effect is an underestimate of the coefficient, known as the attenuation bias. In non-linear models the direction of the bias is likely to be more complicated.
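The attenuation bias can be made concrete with a minimal simulation (an illustrative sketch with assumed parameter values, not an example from the article): the OLS slope converges not to β but to λβ, where λ = σ²x*/(σ²x* + σ²η) is the reliability ratio of the observed regressor.

```python
import numpy as np

# Minimal sketch of attenuation bias (all parameters assumed for illustration).
rng = np.random.default_rng(0)
n = 200_000
beta = 2.0
sigma_star, sigma_eta = 1.0, 1.0
x_star = rng.normal(0.0, sigma_star, n)      # latent regressor
x = x_star + rng.normal(0.0, sigma_eta, n)   # observed with measurement error
y = beta * x_star + rng.normal(0.0, 1.0, n)

ols_slope = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
reliability = sigma_star**2 / (sigma_star**2 + sigma_eta**2)  # lambda = 0.5 here
print(ols_slope)  # close to reliability * beta = 1.0, not beta = 2.0
```

Doubling the measurement-error variance in this sketch drops λ to 1/3 and shrinks the OLS slope further, which is the sense in which noisier regressors "attenuate" the estimated coefficient.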
