
Hat matrix

In statistics, the projection matrix $\mathbf{P}$, sometimes also called the influence matrix or hat matrix $\mathbf{H}$, maps the vector of response values (dependent variable values) to the vector of fitted values (or predicted values). It describes the influence each response value has on each fitted value. The diagonal elements of the projection matrix are the leverages, which describe the influence each response value has on the fitted value for that same observation.

If the vector of response values is denoted by $\mathbf{y}$ and the vector of fitted values by $\mathbf{\hat{y}}$, then

$$\mathbf{\hat{y}} = \mathbf{P}\mathbf{y}.$$

As $\mathbf{\hat{y}}$ is usually pronounced "y-hat", the projection matrix is also named the hat matrix, as it "puts a hat on $\mathbf{y}$". The formula for the vector of residuals $\mathbf{u}$ can also be expressed compactly using the projection matrix:

$$\mathbf{u} = \mathbf{y} - \mathbf{\hat{y}} = (\mathbf{I} - \mathbf{P})\,\mathbf{y},$$

where $\mathbf{I}$ is the identity matrix. The matrix $\mathbf{M} \equiv \mathbf{I} - \mathbf{P}$ is sometimes referred to as the residual maker matrix. Moreover, the element in the ith row and jth column of $\mathbf{P}$ is equal to the covariance between the jth response value and the ith fitted value, divided by the variance of the former:

$$p_{ij} = \frac{\operatorname{Cov}[\hat{y}_i,\, y_j]}{\operatorname{Var}[y_j]}.$$

Therefore, the covariance matrix of the residuals $\mathbf{u}$, by error propagation, equals

$$\mathbf{\Sigma}_{\mathbf{u}} = (\mathbf{I} - \mathbf{P})\,\mathbf{\Sigma}\,(\mathbf{I} - \mathbf{P})^{\mathsf{T}},$$

where $\mathbf{\Sigma}$ is the covariance matrix of the error vector (and, by extension, the response vector as well). For the case of linear models with independent and identically distributed errors, in which $\mathbf{\Sigma} = \sigma^{2}\mathbf{I}$, this reduces to

$$\mathbf{\Sigma}_{\mathbf{u}} = \sigma^{2}\,(\mathbf{I} - \mathbf{P}),$$

since $\mathbf{I} - \mathbf{P}$ is symmetric and idempotent.
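As a concrete illustration, here is a minimal NumPy sketch of these relations for an ordinary least-squares fit. The design matrix `X`, sample size `n`, and random data are assumptions chosen purely for the example; the checks mirror the properties stated above (symmetry and idempotence of $\mathbf{P}$ and $\mathbf{M}$, leverages on the diagonal, residuals via the residual maker matrix).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: an intercept column plus one predictor.
n = 8
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.normal(size=n)

# Projection ("hat") matrix: P = X (X^T X)^{-1} X^T
P = X @ np.linalg.inv(X.T @ X) @ X.T

# Fitted values: y_hat = P y  ("puts a hat on y").
y_hat = P @ y

# Leverages are the diagonal elements of P.
leverages = np.diag(P)

# Residual maker matrix M = I - P, and residuals u = M y.
M = np.eye(n) - P
u = M @ y

# P and M are symmetric and idempotent, which is why, with i.i.d. errors
# (Sigma = sigma^2 I), the residual covariance reduces to sigma^2 (I - P).
assert np.allclose(P, P.T) and np.allclose(P @ P, P)
assert np.allclose(M, M.T) and np.allclose(M @ M, M)

# The leverages sum to the trace of P, i.e. the number of model parameters.
print(leverages.sum(), X.shape[1])
```

Using the explicit inverse keeps the formula recognizable; in practice a solver such as `np.linalg.lstsq` is preferable for numerical stability, but the projection-matrix structure is the same.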

[ "Covariance function", "Estimation of covariance matrices", "Covariance intersection", "Scatter matrix", "CMA-ES" ]
Parent Topic
Child Topic
    No Parent Topic