THE (NON‐)SIGNIFICANCE OF REPORTING ERRORS IN ECONOMICS: EVIDENCE FROM THREE TOP JOURNALS

2020 
We investigate the prevalence and sources of reporting errors in 30,993 hypothesis tests from 370 articles in three top economics journals. We define a reporting error as an inconsistency between the significance level indicated by an eye‐catcher (e.g., asterisks) and the p‐value calculated from the reported statistics, such as coefficients and standard errors. While 35.8% of the articles contain at least one reporting error, only 1.3% of the investigated hypothesis tests are afflicted. For strong reporting errors, in which either the eye‐catcher or the calculated p‐value signals statistical significance but the other does not, the error rate is 0.5% of the investigated hypothesis tests, corresponding to 21.6% of the articles containing at least one strong reporting error. Our analysis suggests a bias in favor of errors in which the eye‐catcher signals statistical significance but the calculated p‐value does not. Survey responses from the respective authors, replications, and exploratory regression analyses point to some ways of mitigating the prevalence of reporting errors in future research.
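The consistency check described in the abstract can be sketched in a few lines: recompute a p‐value from a reported coefficient and standard error, map it to the conventional asterisk notation, and flag disagreements with the printed eye‐catcher. This is a minimal illustrative sketch, not the paper's exact procedure; the two‐sided z‐test, the 0.10/0.05/0.01 star thresholds, and the single‐alpha reading of "strong" errors are assumptions made here for concreteness.

```python
# Illustrative sketch (assumptions: two-sided z-test, conventional 0.10/0.05/0.01
# star thresholds, and "strong" errors judged at a single alpha level).
from scipy.stats import norm

STAR_THRESHOLDS = {"***": 0.01, "**": 0.05, "*": 0.10}

def calculated_p_value(coefficient: float, std_error: float) -> float:
    """Two-sided p-value recomputed from a reported coefficient and standard error."""
    z = abs(coefficient / std_error)
    return 2 * (1 - norm.cdf(z))

def implied_stars(p: float) -> str:
    """Map a p-value to the eye-catcher (asterisk) notation assumed here."""
    for stars, threshold in [("***", 0.01), ("**", 0.05), ("*", 0.10)]:
        if p < threshold:
            return stars
    return ""

def is_reporting_error(reported_stars: str, coefficient: float, std_error: float) -> bool:
    """Reporting error: the printed eye-catcher disagrees with the recomputed p-value."""
    return implied_stars(calculated_p_value(coefficient, std_error)) != reported_stars

def is_strong_reporting_error(reported_stars: str, coefficient: float,
                              std_error: float, alpha: float = 0.05) -> bool:
    """Strong reporting error: one signal indicates significance at alpha, the other does not."""
    reported_significant = reported_stars in STAR_THRESHOLDS and STAR_THRESHOLDS[reported_stars] <= alpha
    calculated_significant = calculated_p_value(coefficient, std_error) < alpha
    return reported_significant != calculated_significant

# Example: a coefficient printed with two stars whose recomputed p-value is about 0.11
# would count as a strong reporting error under these assumptions.
print(is_strong_reporting_error("**", coefficient=0.8, std_error=0.5))  # True
```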