Using Monte Carlo Experiments to Select Meta-Analytic Estimators
2020
The purpose of this study is to show how Monte Carlo analysis of meta-analytic estimators can be used to select estimators for specific research situations. Our analysis conducts 1,620 individual experiments, where each experiment is defined by a unique combination of sample size, effect heterogeneity, effect size, publication selection mechanism, and other research characteristics. We compare eleven estimators commonly used in medicine, psychology, and the social sciences. These are evaluated on the basis of bias, mean squared error (MSE), and coverage rates. For our experimental design, we reproduce simulation environments from four recent studies: Stanley, Doucouliagos, & Ioannidis (2017), Alinaghi & Reed (2018), Bom & Rachinger (2019), and Carter et al. (2019a). We demonstrate that relative estimator performance differs across performance measures: an estimator that performs especially well with respect to MSE may perform relatively poorly with respect to coverage rates. We also show that sample size and effect heterogeneity are important determinants of relative estimator performance. We use these results to demonstrate how the observable characteristics of sample size and effect heterogeneity can guide meta-analysts in choosing the estimators most appropriate for their research circumstances. All of the programming code and output files associated with this project are available at https://osf.io/pr4mb/.
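To illustrate the kind of experiment the abstract describes, the sketch below runs a minimal Monte Carlo evaluation of one simple meta-analytic estimator (an inverse-variance-weighted, fixed-effect mean) on simulated samples with between-study heterogeneity, scoring it on bias, MSE, and 95% coverage. All parameter values (true effect, heterogeneity, study count, replication count) are illustrative assumptions, not the designs used in the study, and no publication selection mechanism is modeled here.

```python
# Minimal sketch of one Monte Carlo "experiment" for a meta-analytic
# estimator. Assumed (hypothetical) parameters, not the study's designs.
import random
import math

random.seed(42)

TRUE_EFFECT = 0.5   # assumed true mean effect
TAU = 0.2           # between-study heterogeneity (SD of true study effects)
N_STUDIES = 30      # studies per simulated meta-analytic sample
N_REPS = 2000       # Monte Carlo replications

estimates, covered = [], 0
for _ in range(N_REPS):
    num, den = 0.0, 0.0
    for _ in range(N_STUDIES):
        se = random.uniform(0.1, 0.5)             # study standard error
        theta_i = random.gauss(TRUE_EFFECT, TAU)  # study's true effect
        y_i = random.gauss(theta_i, se)           # observed effect estimate
        w = 1.0 / se ** 2                         # inverse-variance weight
        num += w * y_i
        den += w
    est = num / den                               # fixed-effect estimate
    se_est = math.sqrt(1.0 / den)                 # nominal SE (ignores TAU)
    estimates.append(est)
    if est - 1.96 * se_est <= TRUE_EFFECT <= est + 1.96 * se_est:
        covered += 1

bias = sum(estimates) / N_REPS - TRUE_EFFECT
mse = sum((e - TRUE_EFFECT) ** 2 for e in estimates) / N_REPS
coverage = covered / N_REPS
print(f"bias={bias:+.4f}  MSE={mse:.4f}  coverage={coverage:.3f}")
```

Because the nominal standard error ignores heterogeneity, the estimator can be nearly unbiased with small MSE while its confidence intervals undercover, illustrating the abstract's point that rankings on MSE and on coverage rates can diverge. A full study design would repeat such experiments over a grid of sample sizes, heterogeneity levels, selection mechanisms, and competing estimators.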