Reporting bias in the literature on the associations of health-related behaviors and statins with cardiovascular disease and all-cause mortality
Leandro F. M. Rezende, Juan Pablo Rey-López, Thiago Hérick de Sá, Nicholas Chartres, Alice Fabbri, Lauren Powell, Emmanuel Stamatakis, Lisa Bero
Abstract:
Reporting bias occurs in the literature when results are selectively revealed or suppressed depending on the direction of the findings. We assessed the risk of reporting bias in the epidemiological literature on health-related behaviors (tobacco, alcohol, diet, physical activity, and sedentary behavior) and cardiovascular disease mortality and all-cause mortality, and provided a comparative assessment of reporting bias between health-related behavior and statin (in primary prevention) meta-analyses. We searched Medline, Embase, the Cochrane Methodology Register Database, and Web of Science for systematic reviews published between 2010 and 2016 that synthesized the associations of health-related behaviors and statins with cardiovascular disease mortality and all-cause mortality. Risk of bias in the systematic reviews was assessed using the ROBIS tool. Reporting bias in the literature was evaluated via small-study effect and excess significance tests. We included 49 systematic reviews in our study. The majority of these reviews exhibited a high overall risk of bias, to a greater extent in health-related behavior reviews than in statin reviews. We reperformed 111 meta-analyses conducted across these reviews, of which 65% had statistically significant results (P < 0.05). Around 22% of health-related behavior meta-analyses showed small-study effects, compared with none of the statin meta-analyses. The physical activity and smoking research areas each had small-study effects in more than 40% of their meta-analyses. We found evidence of excess significance in 26% of health-related behavior meta-analyses, again compared with none of the statin meta-analyses. Half of the meta-analyses from physical activity, 26% from diet, 18% from sedentary behavior, 14% from smoking, and 12% from alcohol showed evidence of excess significance bias. These biases may be distorting the available body of evidence by providing inaccurate estimates of preventive effects on cardiovascular and all-cause mortality.
Keywords: Reporting bias, Meta-regression, Funnel plot
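To make the excess significance test mentioned above concrete, here is a minimal sketch of one common variant: compare the observed number of statistically significant studies with the number expected given each study's power to detect the pooled effect. All effect sizes and standard errors below are invented, and the binomial comparison is a simplification of the chi-square statistic used in the original Ioannidis and Trikalinos formulation.

```python
# Sketch of an excess significance test on invented data.
import numpy as np
from scipy import stats

# Hypothetical log relative risks and standard errors for six studies.
effects = np.array([-0.35, -0.22, -0.41, -0.18, -0.30, -0.25])
ses     = np.array([ 0.15,  0.10,  0.20,  0.12,  0.16,  0.11])

# Fixed-effect pooled estimate, taken here as the plausible "true" effect.
w = 1.0 / ses**2
pooled = np.sum(w * effects) / np.sum(w)

# Power of each study to detect the pooled effect in a two-sided z-test.
z_crit = stats.norm.ppf(0.975)
ncp = pooled / ses                              # noncentrality per study
power = (1 - stats.norm.cdf(z_crit - ncp)) + stats.norm.cdf(-z_crit - ncp)

expected = power.sum()                          # expected count of significant studies
observed = int(np.sum(np.abs(effects / ses) > z_crit))

# A surplus of observed over expected significant results suggests
# excess significance bias.
p = stats.binomtest(observed, n=len(effects), p=expected / len(effects)).pvalue
print(f"observed={observed}, expected={expected:.2f}, p={p:.3f}")
```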
Abstract: Publication bias refers to a systematic deviation from the truth in the results of a meta-analysis that arises because published studies are more likely than unpublished studies to be included in meta-analyses. Publication bias can lead to misleading recommendations for decision and policy making. In this education review, we introduce, explain, and provide solutions to the pervasive misuses and misinterpretations of publication bias that afflict evidence syntheses in sport and exercise medicine, with a focus on the commonly used funnel-plot-based methods. Publication bias is most routinely assessed by visually inspecting funnel plot asymmetry, although this approach has been consistently deemed unreliable, leading to the development of statistical tests to assess publication bias. However, most statistical tests of publication bias (i) cannot rule out alternative explanations for funnel plot asymmetry (e.g., between-study heterogeneity, choice of metric, chance) and (ii) are grossly underpowered, even when using an arbitrary minimum threshold of ten or more studies. We performed a cross-sectional meta-research investigation of how publication bias was assessed in systematic reviews with meta-analyses published in the top two sport and exercise medicine journals throughout 2021. This analysis highlights that publication bias is frequently misused and misinterpreted, even in top-tier journals. Because of conceptual and methodological problems in assessing and interpreting publication bias, preventive strategies (e.g., pre-registration, registered reports, disclosing protocol deviations, and reporting all study findings regardless of direction or magnitude) offer the best and most efficient solution to mitigate its misuse and misinterpretation. Because true publication bias is very difficult to determine, we recommend that future publications use the term "risk of publication bias".
Keywords: Funnel plot, Reporting bias, Study heterogeneity, Sports medicine, Evidence-Based Medicine
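As an illustration of the funnel-plot-based statistical tests discussed above, here is a minimal sketch of Egger's regression test: regress each study's standardized effect on its precision and test whether the intercept differs from zero. The data are invented for illustration only.

```python
# Sketch of Egger's regression test for funnel plot asymmetry.
import numpy as np
import statsmodels.api as sm

# Hypothetical effect sizes and standard errors for eight studies.
effects = np.array([0.10, 0.25, 0.32, 0.48, 0.55, 0.61, 0.15, 0.40])
ses     = np.array([0.05, 0.09, 0.12, 0.18, 0.22, 0.25, 0.07, 0.15])

z = effects / ses          # standardized effects
precision = 1.0 / ses

# Egger's test is the t-test on the intercept of this regression
# (H0: intercept = 0, i.e., no small-study effect).
model = sm.OLS(z, sm.add_constant(precision)).fit()
intercept, slope = model.params
p_egger = model.pvalues[0]
print(f"Egger intercept = {intercept:.2f}, p = {p_egger:.3f}")
```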
Abstract: Publication bias occurs when the results of published studies differ systematically from the results of unpublished studies. The term "dissemination bias" has also been recommended to describe all forms of bias in the research-dissemination process, including outcome-reporting bias, time-lag bias, gray-literature bias, full-publication bias, language bias, citation bias, and media-attention bias. Publication bias can be measured by comparing the results of published and unpublished studies addressing the same question. Following cohorts of studies from inception and comparing publication levels suggests that studies with statistically significant or "positive" results have greater odds of formal publication than those without. Within reviews, funnel plots and related statistical methods can be used to indicate the presence or absence of publication bias, although these can be unreliable in many circumstances. Methods of avoiding publication bias, by identifying and including unpublished outcomes and unpublished studies, are discussed and evaluated. These include searching without limiting by outcome, searching prospective trial registers, searching informal sources such as meeting abstracts and PhD theses, searching regulatory-body websites, contacting authors of included studies, and contacting pharmaceutical or medical device companies for further studies. Adding unpublished studies often alters effect sizes but may not always eliminate publication bias. The compulsory registration of all clinical trials at inception is an important step forward, but it can be difficult for reviewers to access data from unpublished studies located this way. Journals may reduce publication bias by publishing high-quality studies regardless of whether their results are novel or exciting, and by publishing protocols or full study data sets. No single step can be relied upon to fully overcome the complex factors involved in publication bias, and a multipronged approach is required from researchers, patients, journal editors, peer reviewers, research sponsors, research ethics committees, and regulatory and legislative authorities. Keywords: publication bias, reporting bias, research-dissemination bias, evidence synthesis, systematic review, meta-analysis
Keywords: Funnel plot, Reporting bias, Sampling bias, Information bias, Recall bias
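The inception-cohort comparison described above reduces to a 2x2 table: publication status by significance status. Here is a worked example with hypothetical counts (the numbers are invented, chosen only to give an odds ratio in the range empirical studies typically report).

```python
# Worked example: odds of formal publication for significant vs
# non-significant studies, with a 95% CI from the log odds ratio.
import math

# Hypothetical inception cohort: [published, unpublished]
significant    = [120, 40]
nonsignificant = [60, 80]

odds_sig = significant[0] / significant[1]        # 3.00
odds_non = nonsignificant[0] / nonsignificant[1]  # 0.75
or_pub = odds_sig / odds_non                      # 4.0

# Standard error of the log odds ratio: sqrt of summed reciprocal counts.
se_log_or = math.sqrt(sum(1.0 / x for x in significant + nonsignificant))
lo = math.exp(math.log(or_pub) - 1.96 * se_log_or)
hi = math.exp(math.log(or_pub) + 1.96 * se_log_or)
print(f"OR = {or_pub:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```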
Background: The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias and outcome reporting bias have been recognised as potential threats to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Methodology/Principal Findings: In this update, we review and summarise the evidence from cohort studies that have assessed study publication bias or outcome reporting bias in randomised controlled trials. Twenty studies were eligible, of which four were newly identified in this update. Only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Fifteen of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis due to the differences between studies. Conclusions: This update does not change the conclusions of the review, in which 16 studies were included. Direct empirical evidence for the existence of study publication bias and outcome reporting bias is shown. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias, and efforts should be concentrated on improving the reporting of trials.
Keywords: Reporting bias, Odds, Information bias, Funnel plot, Evidence-Based Medicine
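The protocol-versus-publication comparison described above can be sketched as a simple set comparison between pre-specified and reported primary outcomes. The outcome labels below are hypothetical.

```python
# Sketch: flag primary outcomes that were omitted from, or introduced
# into, a publication relative to its protocol (hypothetical labels).
protocol_outcomes  = {"all-cause mortality", "CV mortality", "quality of life"}
published_outcomes = {"all-cause mortality", "CV mortality", "6-min walk distance"}

omitted    = protocol_outcomes - published_outcomes   # pre-specified but unreported
introduced = published_outcomes - protocol_outcomes   # reported but not pre-specified
if omitted or introduced:
    print(f"Discrepancy: omitted={omitted}, introduced={introduced}")
```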
Study publication bias is the decision to publish or not publish a study based on its results. Compared with unpublished work, published studies are more likely to have positive or statistically significant findings. Outcome reporting bias is the decision to publish only a subset of the variables originally recorded for a study, such that which variables appear in the published work is selected on the basis of the results. Statistically significant results have a higher likelihood of being fully reported than nonsignificant results, and a significant proportion of published articles describe outcome variables or data analyses that differ from the pre-specified trial protocol as originally conceived. Recognition that publication bias and outcome reporting bias contribute to a distorted perception of drug effects (inflated estimates of efficacy and underreporting of adverse events) has led to the development and expansion of publicly accessible databases that contain transparent information about clinical trials and their results.
Keywords: Publication, Reporting bias, Information bias, Non-response bias
Publication and related biases constitute serious threats to the validity of research synthesis. If research syntheses are based on a biased selection of the available research, there is an increased risk of producing misleading results. The purpose of this study is to explore the extent of positive-outcome bias, time-lag bias, and place-of-publication bias in published research on the effects of psychological, social, and behavioral interventions. The results are based on 527 Swedish outcome trials published in peer-reviewed journals between 1990 and 2019. We found no difference in the number of studies reporting significant versus non-significant findings, nor in the number of studies reporting strong effect sizes, in the published literature. We found no evidence of time-lag bias or place-of-publication bias in our results. The average reported effect size remained constant over time, as did the proportion of studies reporting significant effects.
Keywords: Reporting bias, Response bias
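One way to probe time-lag bias of the kind examined above is to regress reported effect sizes on publication year; a flat trend is consistent with the null finding the abstract reports. The years and effect sizes below are invented for illustration.

```python
# Sketch: test for a time trend in reported effect sizes.
import numpy as np
from scipy import stats

years   = np.array([1990, 1994, 1999, 2003, 2008, 2012, 2016, 2019])
effects = np.array([0.45, 0.41, 0.44, 0.39, 0.43, 0.40, 0.42, 0.41])  # e.g., Cohen's d

res = stats.linregress(years, effects)
# A slope near zero with a non-significant p-value is consistent with
# average effects remaining constant over time (no time-lag bias signal).
print(f"slope per year = {res.slope:.4f}, p = {res.pvalue:.3f}")
```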
Background: The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias has been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Until recently, outcome reporting bias has received less attention. Methodology/Principal Findings: We review and summarise the evidence from a series of cohort studies that have assessed study publication bias and outcome reporting bias in randomised controlled trials. Sixteen studies were eligible, of which only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Eleven of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis due to the differences between studies. Conclusions: Recent work provides direct empirical evidence for the existence of study publication bias and outcome reporting bias. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published, and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias, and efforts should be concentrated on improving the reporting of trials.
Keywords: Reporting bias, Odds, Information bias, Funnel plot, Evidence-Based Medicine
Systematic reviews and meta-analyses are used by clinicians to derive treatment guidelines and make resource allocation decisions in anesthesiology. One cause for concern with such reviews is the possibility that results from unpublished trials are not represented in the review findings or data synthesis. This problem, known as publication bias, results when studies reporting statistically nonsignificant findings are left unpublished and, therefore, not included in meta-analyses when estimating a pooled treatment effect. In turn, publication bias may lead to skewed results with overestimated effect sizes. The primary objective of this study is to determine the extent to which evaluations for publication bias are conducted by systematic reviewers in highly ranked anesthesiology journals and which practices reviewers use to mitigate publication bias. The secondary objective is to conduct publication bias analyses on the meta-analyses that did not perform these assessments and examine the adjusted pooled effect estimates after accounting for publication bias. This study considered meta-analyses and systematic reviews from 5 peer-reviewed anesthesia journals from 2007 through 2015. A PubMed search was conducted, and full-text systematic reviews that fit inclusion criteria were downloaded and coded independently by 2 authors. Coding was then validated, and disagreements were settled by consensus. In total, 207 systematic reviews were included for analysis. In addition, publication bias evaluation was performed for 25 systematic reviews that did not do so originally. We used Egger regression, Duval and Tweedie trim and fill, and funnel plots for these analyses. Fifty-five percent (n = 114) of the reviews discussed publication bias, and 43% (n = 89) of the reviews evaluated publication bias. Funnel plots and Egger regression were the most common methods for evaluating publication bias. Publication bias was reported in 34 reviews (16%). Thirty-six of the 45 (80.0%) publication bias analyses indicated the presence of publication bias by trim and fill analysis, whereas Egger regression indicated publication bias in 23 of 45 (51.1%) analyses. The mean absolute percent difference between adjusted and observed point estimates was 15.5%, the median was 6.2%, and the range was 0% to 85.5%. Many of these reviews reported following published guidelines such as PRISMA or MOOSE, yet only half appropriately addressed publication bias. Compared with previous research, our study found fewer reviews assessing publication bias and a greater likelihood of publication bias among reviews not performing these evaluations.
Keywords: Funnel plot, Reporting bias, Study heterogeneity, Meta-regression
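Of the methods named above, trim and fill is the least standard to hand-roll. Below is a deliberately simplified, single-iteration sketch of the Duval and Tweedie L0 estimator of the number of suppressed studies, on invented data; real implementations iterate the trimming step and then impute mirrored studies before re-pooling.

```python
# Simplified trim-and-fill sketch: one pass of the L0 estimator.
import numpy as np

# Hypothetical data where smaller (high-SE) studies report larger effects.
effects = np.array([0.90, 0.70, 0.60, 0.50, 0.45, 0.40, 0.35, 0.30, 0.28, 0.25])
ses     = np.array([0.40, 0.35, 0.30, 0.28, 0.25, 0.22, 0.20, 0.15, 0.12, 0.10])

w = 1.0 / ses**2
mu = np.sum(w * effects) / np.sum(w)        # fixed-effect pooled estimate

n = len(effects)
centered = effects - mu
ranks = np.abs(centered).argsort().argsort() + 1   # ranks of |centered|, 1..n
t_n = ranks[centered > 0].sum()                    # rank sum on the "heavy" side

# L0 estimator of the number of missing (suppressed) studies.
l0 = (4 * t_n - n * (n + 1)) / (2 * n - 1)
print(f"estimated missing studies (L0): {max(0, round(l0))}")
```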
Background: Several scales, checklists and domain-based tools for assessing risk of reporting biases exist, but it is unclear how much they vary in content and guidance. We conducted a systematic review of the content and measurement properties of such tools. Methods: We searched for potentially relevant articles in Ovid MEDLINE, Ovid Embase, Ovid PsycINFO and Google Scholar from inception to February 2017. One author screened all titles, abstracts and full-text articles, and collected data on tool characteristics. Results: We identified 18 tools that include an assessment of the risk of reporting bias. Tools varied in regard to the type of reporting bias assessed (eg, bias due to selective publication, bias due to selective non-reporting), and the level of assessment (eg, for the study as a whole, a particular result within a study or a particular synthesis of studies). Various criteria are used across tools to designate a synthesis as being at 'high' risk of bias due to selective publication (eg, evidence of funnel plot asymmetry, use of non-comprehensive searches). However, the relative weight assigned to each criterion in the overall judgement is unclear for most of these tools. Tools for assessing risk of bias due to selective non-reporting guide users to assess a study, or an outcome within a study, as 'high' risk of bias if no results are reported for an outcome. However, assessing the corresponding risk of bias in a synthesis that is missing the non-reported outcomes is outside the scope of most of these tools. Inter-rater agreement estimates were available for five tools. Conclusion: There are several limitations of existing tools for assessing risk of reporting biases, in terms of their scope, guidance for reaching risk of bias judgements and measurement properties. Development and evaluation of a new, comprehensive tool could help overcome present limitations.
Keywords: Funnel plot, PsycINFO, Reporting bias, Evidence-Based Medicine