Meta-analysis of test accuracy studies using imputation for partial reporting of multiple thresholds
2018
Introduction
For tests reporting continuous results, primary studies usually provide test performance at multiple but often different thresholds. This creates missing data when performing a meta-analysis at each threshold. A standard meta-analysis (NI: No Imputation) ignores such missing data. A Single Imputation (SI) approach was recently proposed to recover missing threshold results. Here, we propose a new method (MIDC) that performs Multiple Imputation of the missing threshold results using Discrete Combinations.
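As a rough illustration of the data structure described above (hypothetical numbers, not taken from the paper), the Python sketch below lays out three studies that each report (sensitivity, specificity) at their own subset of thresholds, so a meta-analysis at any single threshold faces missing entries.

```python
# Hypothetical sketch of the missing-data pattern: each study reports
# (sensitivity, specificity) at its own subset of thresholds, so pooling
# at any single threshold must handle the None entries somehow.
reported = {
    "Study 1": {1.0: (0.95, 0.40), 2.0: (0.88, 0.62), 3.0: (0.71, 0.85)},
    "Study 2": {2.0: (0.90, 0.58), 3.0: (0.75, 0.80)},   # threshold 1.0 unreported
    "Study 3": {1.0: (0.93, 0.44), 3.0: (0.69, 0.88)},   # threshold 2.0 unreported
}

thresholds = sorted({t for results in reported.values() for t in results})
for study, results in reported.items():
    print(study, [results.get(t) for t in thresholds])
```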
Methods
The new MIDC method imputes missing threshold results by randomly selecting from the set of all possible discrete combinations that lie between the results for the two known bounding thresholds. Imputed and observed results are then synthesised at each threshold. This is repeated multiple times, and the multiple pooled results at each threshold are combined using Rubin's rules to give final estimates. We compared the NI, SI and MIDC approaches via simulation.
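The following Python sketch illustrates one plausible reading of these steps under simplifying assumptions: "randomly selecting from the set of all possible discrete combinations" is taken as a uniform draw over the admissible (TP, FP) grid between the bounding thresholds, and the per-threshold synthesis is reduced to a toy inverse-variance pooling of logit-sensitivity rather than the model actually used in the paper. The function names, counts and number of imputations M are made up for illustration; in the method itself every missing threshold in every study would be imputed, the synthesis repeated at each threshold, and the M sets of pooled results combined with Rubin's rules as shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_midc(tp_low, tp_high, fp_low, fp_high):
    """Draw one admissible discrete combination (TP, FP) for an unreported
    threshold. As the threshold rises, the count of test positives can only
    fall, so the imputed counts must lie between those at the lower bounding
    threshold (tp_low, fp_low) and the higher one (tp_high, fp_high)."""
    tp = rng.integers(tp_high, tp_low + 1)  # uniform over the admissible range
    fp = rng.integers(fp_high, fp_low + 1)
    return tp, fp

def pool_logit_sens(tp, fn):
    """Toy inverse-variance pooling of logit-sensitivity across studies
    (a simplified stand-in for the per-threshold synthesis model)."""
    tp, fn = np.asarray(tp, float), np.asarray(fn, float)
    sens = (tp + 0.5) / (tp + fn + 1.0)                  # continuity corrected
    logit = np.log(sens / (1.0 - sens))
    w = 1.0 / (1.0 / (tp + 0.5) + 1.0 / (fn + 0.5))      # inverse variances
    return np.sum(w * logit) / np.sum(w), 1.0 / np.sum(w)

def rubins_rules(estimates, variances):
    """Combine M pooled estimates: total variance = mean within-imputation
    variance + (1 + 1/M) * between-imputation variance."""
    m = len(estimates)
    q_bar = np.mean(estimates)
    total_var = np.mean(variances) + (1.0 + 1.0 / m) * np.var(estimates, ddof=1)
    return q_bar, np.sqrt(total_var)

# Hypothetical data: a study with 100 diseased people reports (TP, FP) of
# (90, 120) and (60, 40) at two bounding thresholds but omits the threshold
# in between; two other studies do report that middle threshold.
n_diseased, M = 100, 20
pooled_est, pooled_var = [], []
for _ in range(M):
    tp_mid, fp_mid = impute_midc(tp_low=90, tp_high=60, fp_low=120, fp_high=40)
    tps = np.array([tp_mid, 85, 78])
    fns = np.array([n_diseased - tp_mid, 15, 22])
    est, var = pool_logit_sens(tps, fns)
    pooled_est.append(est)
    pooled_var.append(var)

logit_sens, se = rubins_rules(pooled_est, pooled_var)
print(f"pooled sensitivity {1.0 / (1.0 + np.exp(-logit_sens)):.3f} "
      f"(logit-scale SE {se:.3f})")
```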
Results
Both imputation methods outperformed the NI method in simulations. There was generally little difference between the SI and MIDC methods, but the latter was noticeably better at estimating the between-study variances and generally gave better coverage, owing to slightly larger standard errors of the pooled estimates. Given selective reporting of thresholds, the imputation methods also reduced bias in the summary ROC curve. The simulations further demonstrate that the imputation methods rely on an assumption of equally spaced thresholds. A real example is presented.
Conclusions
The SI and, in particular, MIDC methods can be used to examine the impact of missing threshold results in meta-analysis of test accuracy studies.