Publication in peer-reviewed journals is an essential step in the scientific process. However, publication is not simply the reporting of facts arising from a straightforward analysis of the data. Authors have broad latitude when writing their reports and may be tempted, consciously or unconsciously, to “spin” their study findings. Spin has been defined as a specific reporting, intentional or unintentional, that fails to faithfully reflect the nature and range of the findings and that could affect the impression the results produce in readers. This article, based on a literature review, describes the various practices of spin, from misreporting by “beautification” of methods to misreporting by misinterpretation of the results. It provides data on the prevalence of some forms of spin in specific fields and on the possible effects of some types of spin on readers’ interpretation and on research dissemination. We also discuss why researchers spin their reports and possible ways to avoid it.
Background: Blinding is a cornerstone of treatment evaluation. Blinding is more difficult to obtain in trials assessing nonpharmacological treatment and frequently relies on “creative” (nonstandard) methods. The purpose of this study was to systematically describe the strategies used to obtain blinding in a sample of randomized controlled trials of nonpharmacological treatment.
Methods and Findings: We systematically searched Medline and the Cochrane Methodology Register for randomized controlled trials (RCTs) assessing nonpharmacological treatment with blinding, published during 2004 in high-impact-factor journals. Data were extracted using a standardized extraction form. We identified 145 articles, with the method of blinding described in 123 of the reports. Methods of blinding of participants and/or health care providers and/or other caregivers mainly concerned the use of sham procedures, such as simulation of surgical procedures, similar attention-control interventions, or a placebo with a different mode of administration for rehabilitation or psychotherapy. Trials assessing devices reported various placebo interventions, such as use of a sham prosthesis, an identical apparatus (e.g., an identical but inactivated machine, or an activated machine with a barrier to block the treatment), or simulation of using a device. Blinding participants to the study hypothesis was also an important method of blinding. The methods reported for blinding outcome assessors relied mainly on centralized assessment of paraclinical examinations, clinical examinations (i.e., use of video, audiotape, or photography), or adjudication of clinical events.
Conclusions: This study classifies blinding methods and provides a detailed description of methods that could overcome some barriers to blinding in clinical trials assessing nonpharmacological treatment, and it provides information for readers assessing the quality of results of such trials.
Objective: To evaluate the impact of non-blinded outcome assessment on estimated treatment effects in randomised clinical trials with binary outcomes.
Design: Systematic review of trials with both blinded and non-blinded assessment of the same binary outcome. For each trial we calculated the ratio of odds ratios: the odds ratio from non-blinded assessments relative to the corresponding odds ratio from blinded assessments. A ratio of odds ratios <1 indicated that non-blinded assessors generated more optimistic effect estimates than blinded assessors. We pooled the individual ratios of odds ratios with inverse variance random effects meta-analysis and explored reasons for variation in ratios of odds ratios with meta-regression. We also analysed rates of agreement between blinded and non-blinded assessors and calculated the number of patients that would need to be reclassified to neutralise any bias.
Data Sources: PubMed, Embase, PsycINFO, CINAHL, Cochrane Central Register of Controlled Trials, HighWire Press, and Google Scholar.
Eligibility Criteria: Randomised clinical trials with blinded and non-blinded assessment of the same binary outcome.
Results: We included 21 trials (4391 patients) in the main analysis; eight trials provided individual patient data. Outcomes in most trials were subjective (for example, qualitative assessment of the patient's function). The ratio of odds ratios ranged from 0.02 to 14.4. The pooled ratio of odds ratios was 0.64 (95% confidence interval 0.43 to 0.96), indicating an average exaggeration of the non-blinded odds ratio by 36%. We found no significant association between low ratios of odds ratios and scores for outcome subjectivity (P=0.27), the non-blinded assessors' overall involvement in the trial (P=0.60), or the outcome's vulnerability to non-blinded patients (P=0.52). Blinded and non-blinded assessors agreed in a median of 78% of assessments (interquartile range 64-90%) in the 12 trials with available data. The exaggeration of treatment effects associated with non-blinded assessors was induced by the misclassification of a median of 3% of the assessed patients per trial (range 1-7%).
Conclusions: On average, non-blinded assessors of subjective binary outcomes generated substantially biased effect estimates in randomised clinical trials, exaggerating odds ratios by 36%. This bias was compatible with a high rate of agreement between blinded and non-blinded outcome assessors and was driven by the misclassification of only a few patients.
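The ratio of odds ratios used above is simple arithmetic on two 2x2 tables built from the same patients, once scored by blinded and once by non-blinded assessors. A minimal Python sketch with made-up illustrative counts (the actual trial data are not reproduced here):

```python
def odds_ratio(events_tx, n_tx, events_ctl, n_ctl):
    """Odds ratio for a binary outcome from a 2x2 table."""
    odds_tx = events_tx / (n_tx - events_tx)
    odds_ctl = events_ctl / (n_ctl - events_ctl)
    return odds_tx / odds_ctl

# Hypothetical trial: 100 patients per arm, outcome is an unfavourable event.
# Non-blinded assessors record fewer events in the treated arm.
or_blinded = odds_ratio(30, 100, 40, 100)     # blinded assessment
or_nonblinded = odds_ratio(24, 100, 40, 100)  # non-blinded assessment
ror = or_nonblinded / or_blinded              # ratio of odds ratios
# ror < 1: the non-blinded estimate is more optimistic than the blinded one.
```

With these counts the ratio of odds ratios is about 0.74; a pooled value of 0.64, as in the review above, corresponds to a 36% average exaggeration of the odds ratio.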
Evidence based medicine: does it make a difference? Use wisely
Editor: The term "evidence based medicine" entered the scientific lexicon only a little more than a decade ago.1 What caused its remarkable spread, and what are the implications of its broad and rapid diffusion? The team that coined the term at first considered using the phrase "scientific medicine" but rejected it because it implied that other approaches were by definition unscientific.2 However, critics have argued that the term evidence based medicine carries a similar moral valence and linguistic slipperiness.3 Who could argue against the notion of providing care that integrates individual clinical skill and the best external evidence?4 Originally developed as a method for teaching medical residents, evidence based medicine is being applied ever more broadly to the organisation and delivery of medical services. Multiple stakeholders now seek to assume its mantle for purposes that often contradict its original intent. Managers, equating lack of evidence with lack of effectiveness, use it as a rationale for cutting services. Industry generates evidence of questionable quality to promote its products. Medical researchers come to believe that they hold a monopoly on generating and interpreting evidence. Evidence based medicine, developed as a means of taming the unscientific and messy world of clinical practice, has itself entered the unscientific and messy world of politics.5 Like any technology, evidence based medicine carries risks and benefits and can be used appropriately or inappropriately. Overly inclusive definitions threaten to deprive the term of meaning, and unchecked use increases the risk of misuse. In the past decade, evidence based medicine has contributed much to how we teach, deliver, and think about clinical services. In the coming decade, we must continue to ensure that it is used not only widely but wisely.
Objective: To develop and validate patient-reported instruments, based on patients' lived experiences, for monitoring the symptoms and impact of long coronavirus disease 2019 (long COVID).
Methods: The long COVID Symptom and Impact Tools (ST and IT) were constructed from the answers to a survey with open-ended questions put to 492 patients with long COVID. Validation of the tools involved adult patients with suspected or confirmed COVID-19 and symptoms extending more than 3 weeks after onset. Construct validity was assessed by examining the relations of the ST and IT scores with health-related quality of life (EQ-5D-5L), function (PCFS, post-COVID functional scale), and perceived health (MYMOP2, Measure Yourself Medical Outcome Profile 2). Reliability was determined by a test-retest. The "patient acceptable symptomatic state" (PASS) was determined by the percentile method.
Results: Validation involved 1022 participants (55% with confirmed COVID-19, 79% female, and 12.5% hospitalized for COVID-19). The long COVID ST and IT scores were strongly correlated with the EQ-5D-5L (rs = -0.45 and rs = -0.59, respectively), the PCFS (rs = -0.39 and rs = -0.55), and the MYMOP2 (rs = -0.40 and rs = -0.59). Reproducibility was excellent, with an intraclass correlation coefficient of 0.83 (95% confidence interval 0.80 to 0.86) for the ST score and 0.84 (0.80 to 0.87) for the IT score. In total, 793 (77.5%) patients reported an unacceptable symptomatic state, thereby setting the PASS for the long COVID IT score at 30 (28 to 33).
Conclusions: The long COVID ST and IT tools, constructed from patients' lived experiences, provide the first validated and reliable instruments for monitoring the symptoms and impact of long COVID.
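The percentile method for a PASS threshold takes a fixed percentile of the outcome score within a group defined by the anchor question. A common variant uses the 75th percentile of the score among patients who judge their state acceptable; the exact percentile and anchor used in the study above are not stated in the abstract, so both are assumptions in this Python sketch with made-up scores:

```python
import statistics

def pass_threshold(scores, acceptable, pct=75):
    """Estimate a PASS cutoff as the pct-th percentile of the score
    among patients reporting an acceptable state (higher score is
    assumed to mean worse symptoms)."""
    accepted = [s for s, ok in zip(scores, acceptable) if ok]
    # statistics.quantiles with n=100 returns the 1st..99th percentiles
    return statistics.quantiles(accepted, n=100)[pct - 1]

# Hypothetical impact scores and anchor answers for ten patients.
scores     = [5, 10, 12, 18, 22, 25, 28, 31, 40, 55]
acceptable = [True, True, True, True, True, True,
              False, False, False, False]
threshold = pass_threshold(scores, acceptable)
```

Scores at or below the threshold would then be interpreted as an acceptable symptomatic state; confidence intervals around the cutoff (such as the 28 to 33 reported above) are typically obtained by bootstrapping.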
Design: Issues of importance to patients, and possible scale items, were generated by literature review and by non-structured interviews of patients, former patients, health care providers, and researchers. Semi-structured interviews with inpatients and pilot studies were conducted to modify or remove ambiguous questions and to reduce skewed responses. A study was then made to select from these questions the relevant items and the variables correlated with patients' evaluation of quality of care. A principal-components analysis was performed to select items and assess construct validity. Cronbach's α coefficients were calculated to estimate the reliability of the scale. Reliability over time and concurrent validity were also considered.
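Cronbach's α, used above to estimate scale reliability, is a simple function of the item variances and the variance of the total score. A minimal Python sketch with made-up ratings for illustration:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a multi-item scale.

    item_scores: one list per item, each holding that item's score
    for every respondent (all lists the same length).
    """
    k = len(item_scores)
    total_scores = [sum(vals) for vals in zip(*item_scores)]
    return (k / (k - 1)) * (1 - sum(variance(x) for x in item_scores)
                            / variance(total_scores))

# Three items rated by five respondents; highly consistent answers
# across items yield an alpha close to 1.
items = [
    [1, 2, 3, 4, 5],
    [1, 2, 3, 4, 5],
    [2, 2, 3, 4, 5],
]
alpha = cronbach_alpha(items)
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency for a scale of this kind.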
Abstract: Randomized controlled trials (RCTs) are essential to support clinical decision making. We assessed the transparency, completeness, and consistency of reporting of 244 reports (120 peer-reviewed journal publications; 124 preprints) of RCTs assessing pharmacological interventions for the treatment of COVID-19 published during the first 17 months of the pandemic (up to May 31, 2021). Transparency was poor: only 55% of trials were prospectively registered, 39% made their full protocols available, and 29% provided access to their statistical analysis plan. Only 6% completely reported the most important information. The primary outcome(s) reported in trial registries and in published reports were inconsistent in 47% of trials. Of the 124 RCTs published as preprints, 76 were subsequently published in a peer-reviewed journal, with no major improvement after the peer-review process. Lack of transparency, completeness, and consistency of reporting is an important barrier to trust, interpretation, and synthesis of COVID-19 clinical trials.