    Application of the percentage of non-overlapping data (PND) in systematic reviews and meta-analyses: A systematic review of reporting characteristics
    Citations (81) · References (10)
    Abstract:
    The percentage of non-overlapping data (PND; Scruggs, Mastropieri, & Casto, 1987) is one of several outcome metrics for aggregating data across studies using single-subject experimental designs. The application of PND requires the systematic reviewer to make various decisions related to the inclusion of studies, extraction of data, and analysis and interpretation of data. The purpose of this systematic review was to determine the reporting characteristics associated with the application of PND in systematic reviews and meta-analyses. The authors engage in a discussion of the reporting characteristics found in the data set and propose several directions for future applications and reporting of PND in systematic reviews.
    Keywords:
    Data extraction
    Systematic error
    Data set
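As defined by Scruggs, Mastropieri, and Casto (1987), PND is the percentage of treatment-phase data points that do not overlap with the most extreme baseline data point. A minimal sketch of that computation (the function name and example data are illustrative, not taken from the review):

```python
def pnd(baseline, treatment, higher_is_better=True):
    """Percentage of non-overlapping data (Scruggs, Mastropieri, & Casto, 1987):
    the share of treatment-phase points that exceed the most extreme
    baseline point (the baseline maximum when an increase is expected,
    the baseline minimum when a decrease is expected)."""
    if not baseline or not treatment:
        raise ValueError("both phases need at least one data point")
    if higher_is_better:
        threshold = max(baseline)
        non_overlap = sum(1 for x in treatment if x > threshold)
    else:
        threshold = min(baseline)
        non_overlap = sum(1 for x in treatment if x < threshold)
    return 100.0 * non_overlap / len(treatment)

baseline = [2, 3, 4, 3]
treatment = [5, 6, 4, 7, 8]
print(pnd(baseline, treatment))  # 4 of 5 points exceed the baseline max of 4 -> 80.0
```

Conventionally, PND values above 90 are read as highly effective interventions and values below 50 as ineffective, which is one reason the extraction and interpretation decisions surveyed in this review matter.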
    Systematic reviews and meta-analyses have been proposed as an approach to synthesize the literature and counteract the lack of power of small preclinical studies. We aimed to evaluate (1) the methodology of these reviews, (2) the methodological quality of the studies they included and (3) whether study methodological characteristics affect effect size. We searched MEDLINE to retrieve 212 systematic reviews with meta-analyses of preclinical studies published from January 2018 to March 2020. Fewer than 15% searched the grey literature. Selection, data extraction and risk of bias assessment were performed in duplicate in fewer than two thirds of reviews. Most of them assessed the methodological quality of included studies and reported the meta-analysis model. The risk of bias of included studies was mostly rated unclear. In meta-epidemiological analysis, none of the study methodological characteristics was associated with effect size. The methodological characteristics of systematic reviews with meta-analyses of recently published preclinical studies seem to have improved compared with previous assessments, but the methodological quality of included studies remains poor, limiting the validity of their results. Our meta-epidemiological analysis did not show any evidence of an association between methodological characteristics of included studies and effect size.
    Data extraction
    Abstract Introduction: The primary motor cortex (M1) is a key brain region implicated in pain processing. Here, we present a protocol for a review that aims to synthesise and critically appraise the evidence for the effect of experimental pain on M1 function. Methods/Analysis: A systematic review and meta-analysis will be conducted. Electronic databases will be searched using a predetermined strategy. Studies published before April 2020 that investigate the effects of experimentally induced pain on corticomotor excitability (CME) in healthy individuals will be included if they meet eligibility criteria. Study identification, data extraction and risk of bias assessment will be conducted by two independent reviewers, with a third reviewer consulted for any disagreements. The primary outcomes will include group-level changes in CME and intracortical, transcortical and sensorimotor modulators of CME. A separate analysis using individual data will also be conducted to explore individual differences in CME in response to experimental pain. The meta-analysis will consider the following factors: pain model (transient, tonic, transitional pain), type of painful tissue (cutaneous, musculoskeletal), time points of outcome measures (during or after recovery from pain) and localisation of pain (target area, control area). Discussion: This review will provide a comprehensive understanding of the mechanisms within M1 that mediate experimentally induced pain, at both a group and individual level. Registration Number: The systematic review is registered with the International Prospective Register of Systematic Reviews (#CRD42020173172).
    Data extraction
    Citations (0)
    Concerns have been raised regarding the quality and completeness of abstract reporting in evidence reviews, but this has not been evaluated in meta-analyses of diagnostic accuracy. Our objective was to evaluate reporting quality and completeness in abstracts of systematic reviews with meta-analyses of depression screening tool accuracy, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) for Abstracts tool. Cross-sectional study. We searched MEDLINE and PsycINFO from 1 January 2005 through 13 March 2016 for recent systematic reviews with meta-analyses in any language that compared a depression screening tool to a diagnosis based on a clinical or validated diagnostic interview. Two reviewers independently assessed quality and completeness of abstract reporting using the PRISMA for Abstracts tool, with appropriate adaptations made for studies of diagnostic test accuracy. Bivariate associations between the number of PRISMA for Abstracts items complied with and (1) journal abstract word limit and (2) A Measurement Tool to Assess Systematic Reviews (AMSTAR) scores of the meta-analyses were also assessed. We identified 21 eligible meta-analyses. Only two of the 21 included meta-analyses complied with at least half of the adapted PRISMA for Abstracts items. The majority met criteria for reporting an appropriate title (95%), result interpretation (95%) and synthesis of results (76%). Meta-analyses less consistently reported databases searched (43%), associated search dates (33%) and strengths and limitations of evidence (19%). Most meta-analyses did not adequately report a clinically meaningful description of outcomes (14%), risk of bias (14%), included study characteristics (10%), study eligibility criteria (5%), registration information (5%), clear objectives (0%), report eligibility criteria (0%) or funding (0%). Overall meta-analysis quality scores were significantly associated with the number of PRISMA for Abstracts items reported adequately (r=0.45). Quality and completeness of reporting were found to be suboptimal. Journal editors should endorse PRISMA for Abstracts and allow for flexibility in abstract word counts to improve the quality of abstracts.
    PsycINFO
    Data extraction
    Citations (12)
    Abstract: It is often advisable for researchers to use an existing data set to answer research questions. In particular, using an existing data set can help a researcher obtain results much more quickly, at a lower cost, and without exposing new research subjects to many of the potential harms associated with research participation. However, researchers seeking to use an existing data set face a variety of challenges specific to this research methodology. This article reviews some of the key differences between using an existing data set and conducting research by recruiting research subjects. Advantages and disadvantages associated with the use of existing data sets are discussed, as are ethical issues, strategies to obtain an optimal data set, and special considerations associated with this methodology. Additionally, suggestions are given for reporting results when conducting research using an existing data set or a “secondary analysis”.
    Data set
    Research Data
    Citations (12)
    BACKGROUND: Researchers in evidence-based medicine cannot keep up with the volume of both old and newly published primary research articles. Support for the early stages of the systematic review process – searching and screening studies for eligibility – is necessary because it is currently impossible to search for relevant research with precision. Better automated data extraction may not only facilitate the stage of review traditionally labelled ‘data extraction’, but also change earlier phases of the review process by making it possible to identify relevant research. Exponential improvements in computational processing speed and data storage are fostering the development of data mining models and algorithms. This, in combination with quicker pathways to publication, has led to a large landscape of tools and methods for data mining and extraction. OBJECTIVE: To review published methods and tools for data extraction to (semi-)automate the systematic reviewing process. METHODS: We propose to conduct a living review. With this methodology we aim to perform constant evidence surveillance, with bi-monthly search updates, as well as review updates every 6 months if new evidence permits. In a cross-sectional analysis we will extract methodological characteristics and assess the quality of reporting in the included papers. CONCLUSIONS: We aim to increase transparency in the reporting and assessment of automation technologies to the benefit of data scientists, systematic reviewers and funders of health research. This living review will help to reduce duplicate effort by data scientists who develop data mining methods. It will also serve to inform systematic reviewers about possibilities to support their data extraction.
    Data extraction
    Citations (1)
    Introduction High-quality synthesized evidence of sweet taste analgesia in neonates exists. However, Chinese databases have never been included in previous systematic reviews of sweet solutions for procedural pain. Objective To conduct a systematic review of Chinese literature evaluating analgesic effects of sweet solutions for neonates. Data sources: Wang Fang, China National Knowledge Infrastructure and Chinese Biomedical Literature Database. Data extraction and analysis: Two authors screened studies for inclusion and conducted risk of bias ratings and data extraction. A third author resolved any conflicts. Meta-analyses were performed using RevMan 5.2 software, on mean differences in pain outcomes using random effects models. Results Thirty-one trials (4999 neonates) were included; 26 trials used glucose, 4 used sucrose, and 1 trial evaluated both solutions. Sweet solutions reduced standardized mean pain scores (n = 21 studies; −1.68, 95% confidence interval −2.08, −1.27) and cry duration (n = 6 studies; −25.60, 95% confidence interval −36.47, −14.72 s) but not heart rate change (n = 7 studies; −17.64, 95% confidence interval −52.71, 17.43). No included studies cited the previously published systematic reviews of sweet solutions. Conclusions This systematic review of Chinese databases showed the same results as previously published systematic reviews. No trials included in this review cited the English systematic reviews, highlighting a parallel research agenda.
    Data extraction
    Citations (18)
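The pooled pain outcomes in the review above are standardized mean differences with 95% confidence intervals. For readers unfamiliar with the metric, a minimal sketch of one common standardization (Hedges' g, i.e. Cohen's d with a small-sample correction; the function and the numbers are illustrative assumptions, not data from the review):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference between two groups, with Hedges'
    small-sample correction applied to Cohen's d (illustrative sketch)."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp               # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction factor J
    return d * j

# Hypothetical pain scores: sweet-solution group lower than control
print(round(hedges_g(3.0, 2.0, 30, 5.0, 2.0, 30), 3))  # -0.987, favouring treatment
```

A negative g here means the first group scored lower, matching the direction of the pooled estimate of −1.68 reported above.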
    Background Redundancy is an unethical, unscientific, and costly challenge in clinical health research. There is a high risk of redundancy when existing evidence is not used to justify the research question when a new study is initiated. Therefore, the aim of this study was to synthesize meta-research studies evaluating if and how authors of clinical health research studies use systematic reviews when initiating a new study. Methods Seven electronic bibliographic databases were searched (final search June 2021). Meta-research studies assessing the use of systematic reviews when justifying new clinical health studies were included. Screening and data extraction were performed by two reviewers independently. The primary outcome was defined as the percentage of original studies within the included meta-research studies using systematic reviews of previous studies to justify a new study. Results were synthesized narratively and quantitatively using a random-effects meta-analysis. The protocol has been registered in Open Science Framework ( https://osf.io/nw7ch/ ). Results Twenty-one meta-research studies were included, representing 3,621 original studies or protocols. Nineteen of the 21 studies were included in the meta-analysis. The included studies represented different disciplines and exhibited wide variability both in how the use of previous systematic reviews was assessed, and in how this was reported. The use of systematic reviews to justify new studies varied from 16% to 87%. The mean percentage of original studies using systematic reviews to justify their study was 42% (95% CI: 36% to 48%). Conclusion Justification of new studies in clinical health research using systematic reviews is highly variable, and fewer than half of new clinical studies in health science were justified using a systematic review. Research redundancy is a challenge for clinical health researchers, as well as for funders, ethics committees, and journals.
    Data extraction
    Research Design
    Abstract Background Results of new studies should be interpreted in the context of what is already known to compare results and build the state of the science. This systematic review and meta-analysis aimed to identify and synthesise results from meta-research studies examining if original studies within health use systematic reviews to place their results in the context of earlier, similar studies. Methods We searched MEDLINE (OVID), EMBASE (OVID), and the Cochrane Methodology Register for meta-research studies reporting the use of systematic reviews to place results of original clinical studies in the context of existing studies. The primary outcome was the percentage of original studies included in the meta-research studies using systematic reviews or meta-analyses placing new results in the context of existing studies. Two reviewers independently performed screening and data extraction. Data were synthesised using narrative synthesis and a random-effects meta-analysis was performed to estimate the mean proportion of original studies placing their results in the context of earlier studies. The protocol was registered in Open Science Framework. Results We included 15 meta-research studies, representing 1724 original studies. The mean percentage of original studies within these meta-research studies placing their results in the context of existing studies was 30.7% (95% CI [23.8%, 37.6%], I² = 87.4%). Only one of the meta-research studies integrated results in a meta-analysis, while four integrated their results within a systematic review; the remaining cited or referred to a systematic review. The results of this systematic review are characterised by a high degree of heterogeneity and should be interpreted cautiously. Conclusion Our systematic review demonstrates a low rate of and great variability in using systematic reviews to place new results in the context of existing studies. On average, one third of the original studies contextualised their results.
Improvement is still needed in researchers’ systematic and transparent use of prior research (also known as an evidence-based research approach) to contribute to the accumulation of new evidence on which future studies should be based. Systematic review registration: Open Science registration number https://osf.io/8gkzu/
    Data extraction
    Research Design
    Citations (11)
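Several of the reviews above pool proportions with a random-effects model and report heterogeneity as I². A minimal sketch of one widely used estimator (DerSimonian-Laird; the abstracts do not state which estimator they used, so this choice is an assumption for illustration):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method
    (illustrative; the reviews above may use a different estimator)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight with tau^2 added to each within-study variance
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, i2

# Hypothetical study proportions echoing the 16%-87% spread reported above
pooled, ci, i2 = dersimonian_laird([0.16, 0.42, 0.87], [0.02, 0.02, 0.02])
print(round(pooled, 2), round(i2, 1))
```

With heterogeneity this large, tau² dominates the weights and the pooled mean sits near the simple average of the study proportions, which is why the reviews above caution against over-interpreting their point estimates.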