For many years it has been claimed that observational studies find stronger treatment effects than randomized, controlled trials. We compared the results of observational studies with those of randomized, controlled trials.
Study Design. Literature review and survey of spine surgeons.
Objective. To identify reasons for variation in results among observational studies of spinal surgery.
Summary of Background Data. Orthopedic treatments are often evaluated by observational studies rather than randomized controlled trials. The value of observational studies is debated.
Methods. A literature search was performed to find several observational studies that compared the same spinal surgeries. Possible confounders for these studies were identified by a survey of spinal surgeons. Study characteristics from these articles were tested for an association with study results.
Results. Most observational studies were case series. Articles studied in depth included 20 evaluating chemonucleolysis and 14 evaluating spinal arthrodesis for patients who had herniated disc or spinal stenosis. For each treatment comparison, results varied from strongly favoring one treatment to strongly favoring the other. Apparent causes of the variation were patient selection criteria, the choice of outcome measure, and follow-up rate. Few studies reported on the potential confounders identified by the surgeon survey, and only one study used statistical methods to reduce the influence of confounding.
Conclusions. The results suggest that review of several comparable observational studies may help evaluate treatment, identify the patient types most likely to benefit from a given treatment, and provide information about study features that can improve the design of subsequent observational or randomized controlled studies. The potential of comparative observational studies has not been realized because of current inadequacies in their design, analysis, and reporting.
Abstract
Background. Previous studies have assessed the validity of the observational study design by comparing results of studies using this design to results from randomized controlled trials. The present study examined design features of observational studies that could have influenced these comparisons.
Methods. To find at least 4 observational studies that evaluated the same treatment, we reviewed meta-analyses comparing observational studies and randomized controlled trials for the assessment of medical treatments. Details critical for interpretation of these studies were abstracted and analyzed qualitatively.
Results. Individual articles reviewed included 61 observational studies that assessed 10 treatment comparisons evaluated in two studies comparing randomized controlled trials and observational studies. The majority of studies did not report the following information: details of primary and ancillary treatments, outcome definitions, length of follow-up, inclusion/exclusion criteria, patient characteristics relevant to prognosis or treatment response, or assessment of possible confounding. When this information was reported, variations in treatment specifics, outcome definition, or confounding were identified as possible causes of differences between observational studies and randomized controlled trials, and of heterogeneity among observational studies.
Conclusion. Reporting of observational studies of medical treatments was often inadequate to compare study designs or allow other meaningful interpretation of results. All observational studies should report details of treatment, outcome assessment, patient characteristics, and confounding assessment.