Interrater reliability and convergent validity of the American Academy for Cerebral Palsy and Developmental Medicine methodology for conducting systematic reviews

2012 
Aim: The aim of this study was to evaluate the interrater reliability and convergent validity of the American Academy for Cerebral Palsy and Developmental Medicine's (AACPDM) methodology for conducting systematic reviews (group design studies).

Method: Four clinicians independently rated 24 articles for level of evidence and conduct using the AACPDM methodology. Study conduct was also assessed using the Effective Public Health Practice Project scale. Raters were randomly assigned to one of two pairs to resolve discrepancies. The level of agreement between individual raters and between pairs was calculated using kappa (α=0.05) and intraclass correlation coefficients (ICCs; α=0.05). Spearman's rank correlation coefficient was calculated to evaluate the relationship between the quality categories assigned by the two tools.

Results: There was acceptable agreement between raters (κ=0.77; p<0.001; ICC=0.90) and between assigned pairs (κ=0.83; p<0.001; ICC=0.96) for the level-of-evidence ratings. There was acceptable agreement between pairs for four of the seven conduct questions (κ=0.53–0.87). ICCs across all raters for the conduct category ratings (weak, moderate, and strong) also indicated good agreement (ICC=0.76). Spearman's rho indicated a significant positive correlation between the overall quality categories assigned by the two tools (ρ=0.52; p<0.001).

Conclusions: The AACPDM rating system has acceptable interrater reliability, and its study quality ratings showed reasonable agreement with those of a similar tool.
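The statistics reported above (Cohen's kappa, intraclass correlation, and Spearman's rho) are standard agreement measures. As a minimal sketch of how such values can be computed, assuming Python with NumPy, SciPy, and scikit-learn; the rating arrays and the icc_2_1 helper below are hypothetical illustrations, not the study's data or analysis code:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    Y is an (n subjects) x (k raters) array of ratings."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ssr = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between-subject sum of squares
    ssc = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between-rater sum of squares
    sse = ((Y - grand) ** 2).sum() - ssr - ssc        # residual sum of squares
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical level-of-evidence ratings (levels I-V) from two raters for 24 articles.
rng = np.random.default_rng(0)
rater_a = rng.integers(1, 6, size=24)
# Second rater agrees ~80% of the time, otherwise rates independently.
rater_b = np.where(rng.random(24) < 0.8, rater_a, rng.integers(1, 6, size=24))

kappa = cohen_kappa_score(rater_a, rater_b)          # chance-corrected agreement
icc = icc_2_1(np.column_stack([rater_a, rater_b]))   # absolute-agreement ICC
rho, p = spearmanr(rater_a, rater_b)                 # monotonic association

print(f"kappa = {kappa:.2f}, ICC(2,1) = {icc:.2f}, rho = {rho:.2f} (p = {p:.3f})")
```

The ICC(2,1) form (two-way random effects, absolute agreement, single rater) follows Shrout and Fleiss; the abstract does not specify which ICC model the authors used, so that choice is an assumption of this sketch.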