The implications of unconfounding multisource performance ratings.
2019
The reliability of job performance ratings is a divisive topic in applied psychology because commonly reported reliability estimates are low and because those estimates are often used to correct validity coefficients (LeBreton, Scherer, & James, 2014). Previous research has attended to the multifaceted nature of multisource job performance ratings, but measurement-design-relevant effects have been confounded in that work. In separate samples from 2 different applications and measurement designs, we unconfounded effects relevant to multisource performance ratings using a Bayesian generalizability theory approach. Our results suggest that the main contributors to reliability in multisource ratings are source-related and general performance effects. Conservative reliability estimates based on our results ranged from .81 to .84. We raise questions for future research about corrections to validity coefficients for criterion unreliability and about reconsidering the measurement design formally applied to multisource ratings.
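For readers unfamiliar with generalizability theory, the following is a minimal sketch of how reliability is assembled from variance components; the notation is standard G-theory, not the paper's exact model. In a persons × (raters : sources) design, the generalizability coefficient for a rating averaged over $n_s$ sources with $n_r$ raters per source is

$$
E\rho^2 = \frac{\sigma^2_{p}}{\sigma^2_{p} + \dfrac{\sigma^2_{ps}}{n_s} + \dfrac{\sigma^2_{pr:s,e}}{n_s\, n_r}},
$$

where $\sigma^2_p$ is general (person) performance variance, $\sigma^2_{ps}$ is person-by-source variance, and $\sigma^2_{pr:s,e}$ pools rater-within-source and residual variance. The correction mentioned above is the classical disattenuation of a validity coefficient for criterion unreliability, $r_{x\hat{y}} = r_{xy} / \sqrt{r_{yy}}$; with a criterion reliability in the reported .81 to .84 range, an observed validity of .30 would correct to roughly .33.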
Keywords:
- Correction
- Source