A typical clinical study is an experiment designed to test a null hypothesis of no difference between groups against an alternative hypothesis that a difference exists.
When a clinical study compares one treatment to a placebo or to a different treatment, it is essential to minimize possible factors that might confound the result. For this reason, most clinical studies randomize the assignment of study subjects across study arms. The alternative, enrolling matched, essentially identical subjects across arms, is limited, usually impractical, and prone to selection bias.

There are three types of variables to consider when designing a clinical study:

- The independent variable, controlled by the experimenter (two or more treatments)
- A dependent variable (the outcome of interest, e.g., the presence or absence of a disease)
- Potentially confounding variables (other factors that could affect the dependent variable but are not the focus of the study)

Collectively, these potentially confounding variables are referred to as statistical "noise." Ensuring that statistical noise is distributed evenly between treatments is essential to measuring the relationship between a treatment difference and a result of interest. The primary tool for achieving this even distribution is randomization. True randomization means that, prior to treatment assignment, any given study subject has an equal chance of receiving either treatment. Randomization can be augmented by other methods to help ensure that statistical noise is distributed evenly between treatments and that the sample for each arm is truly representative of the target population.

We will start by discussing a clinical study that compares two treatments across two groups of the same size. The same principles also apply to study designs with more than two treatment groups and to designs in which the treatment groups are not the same size.

Treatments affect outcomes, but so do a host of other factors, such as the subject's age, gender, genetic predisposition, and nutrition.
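The idea that every subject has an equal chance of receiving either treatment, while the two arms end up the same size, can be sketched in code. The following is a minimal illustration of complete randomization for a two-arm study; the function name and arm labels are hypothetical, not taken from any particular trial software.

```python
import random

def randomize_subjects(subject_ids, seed=None):
    """Randomly allocate subjects to two equal-sized arms.

    A 50/50 list of arm labels is built and then shuffled, so each
    subject has an equal chance of either arm while group sizes
    stay balanced (complete randomization).
    """
    rng = random.Random(seed)
    n = len(subject_ids)
    # Equal numbers of each arm; if n is odd, add one extra arm at random.
    arms = ["treatment", "control"] * (n // 2)
    if n % 2:
        arms.append(rng.choice(["treatment", "control"]))
    rng.shuffle(arms)  # every ordering of the allocation list is equally likely
    return dict(zip(subject_ids, arms))

# Example: allocate 100 subjects and confirm the arms are balanced.
assignments = randomize_subjects(list(range(100)), seed=42)
n_treatment = sum(1 for arm in assignments.values() if arm == "treatment")
print(n_treatment, 100 - n_treatment)
```

In practice, trials often use permuted-block randomization (shuffling within small blocks) so that the arms stay balanced even if enrollment stops early, but the principle is the same.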
The treatment variable (experimental or control) is referred to as the "independent" variable, and the outcome of interest is measured by a "dependent" variable. The experimenter controls the independent variable and is interested in its effect on the dependent variable. The trick is to isolate the effect of the treatment difference, the independent variable, against the background of "noise" caused by the other factors, some of which are unknown. With careful planning, the cause-and-effect relationship between the independent and dependent variables can be measured accurately.

Rather than trying to create two matched samples by manipulating all of these noise factors, we assign subjects randomly. The assumption is that, with enough study subjects, any differences between the two treatment arms prior to the introduction of the treatment difference will be inconsequential. An additional benefit of randomization is that important variables affecting the outcome that have gone unnoticed will also be distributed evenly between treatment groups in samples of adequate size.

Isolating the treatment difference adequately creates a "strong inference" that rules out alternative explanations for the difference observed in the dependent variable. In other words, the only possible reasons for the difference are the treatment difference and pure chance. (Pure chance in this context refers to the likelihood of finding a difference at least as large as the observed difference.) It is only when strong