    Bias attenuation results for dichotomization of a continuous confounder
    Abstract:
    It is well-known that dichotomization can cause bias and loss of efficiency in estimation. One can easily construct examples where adjusting for a dichotomized confounder causes bias in causal estimation. There are additional examples in the literature where adjusting for a dichotomized confounder can be more biased than not adjusting at all. The message is clear: do not dichotomize. What is unclear is whether there are scenarios where adjusting for the dichotomized confounder always leads to lower bias than not adjusting. We propose several sets of conditions that characterize scenarios where one should always adjust for the dichotomized confounder to reduce bias. We then highlight scenarios where the decision to adjust should be made more cautiously. To our knowledge, this is the first formal presentation of conditions that give information about when one should, and potentially should not, adjust for a dichotomized confounder.
    Keywords:
    Information bias
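    The core trade-off in the abstract can be seen in a small simulation: adjusting for a median-split version of a continuous confounder usually removes only part of the confounding, so the question is whether that partial adjustment beats no adjustment at all. Below is a minimal sketch in Python; the linear model and all coefficients are hypothetical choices for illustration, not taken from the paper.

# Illustrative simulation (hypothetical coefficients): bias in the estimated effect
# of an exposure X on an outcome Y when a continuous confounder C is ignored,
# dichotomized at its median, or adjusted for on its original scale.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_effect = 1.0

C = rng.normal(size=n)                                # continuous confounder
X = 0.8 * C + rng.normal(size=n)                      # exposure influenced by C
Y = true_effect * X + 2.0 * C + rng.normal(size=n)    # outcome influenced by X and C

def effect_of_X(covariates):
    """OLS coefficient of X in a regression of Y on X plus the given covariates."""
    design = np.column_stack([np.ones(n), X] + covariates)
    beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
    return beta[1]

C_dichotomized = (C > np.median(C)).astype(float)     # median-split confounder

print("true effect:             ", true_effect)
print("no adjustment:           ", round(effect_of_X([]), 3))
print("dichotomized adjustment: ", round(effect_of_X([C_dichotomized]), 3))
print("continuous adjustment:   ", round(effect_of_X([C]), 3))

    In this linear setting the dichotomized adjustment typically lands between the unadjusted and the fully adjusted estimates, which is the kind of bias attenuation the paper's conditions are meant to characterize.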
    Researchers conducting observational studies need to consider 3 types of biases: selection bias, information bias, and confounding bias. A whole arsenal of statistical tools can be used to deal with information and confounding biases. However, methods for addressing selection bias and unmeasured confounding are less developed. In this paper, we propose general bounding formulas for bias, including selection bias and unmeasured confounding. This should help researchers make more prudent interpretations of their (potentially biased) results.
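    As a concrete, simplified instance of this style of bias bounding, the sketch below uses the Ding-VanderWeele bounding factor and E-value for a single unmeasured confounder. It illustrates the general idea of bounding bias under an assumed confounder strength; it is not necessarily the set of formulas proposed in this particular paper, and the numeric inputs are hypothetical.

# Bias-bounding illustration (hypothetical inputs): how strongly an unmeasured
# confounder of assumed strength could have distorted an observed risk ratio.
import math

def bounding_factor(rr_eu: float, rr_ud: float) -> float:
    """Ding-VanderWeele bounding factor: maximum multiplicative bias from an
    unmeasured confounder with exposure-confounder risk ratio rr_eu and
    confounder-outcome risk ratio rr_ud."""
    return rr_eu * rr_ud / (rr_eu + rr_ud - 1.0)

def e_value(rr_observed: float) -> float:
    """Minimum strength of confounding (on both associations) needed to fully
    explain away an observed risk ratio greater than 1."""
    return rr_observed + math.sqrt(rr_observed * (rr_observed - 1.0))

rr_observed = 1.8                       # hypothetical observed risk ratio
bf = bounding_factor(2.0, 2.0)          # assume both confounder associations are at most 2
print("bounding factor:", round(bf, 2))                        # 1.33
print("worst-case adjusted RR:", round(rr_observed / bf, 2))   # 1.35
print("E-value:", round(e_value(rr_observed), 2))              # 3.0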
    Background: Although use of selective serotonin reuptake inhibitor (SSRI) antidepressants has been associated with hip fractures in claims data studies, it has been suggested that such results may be confounded by cognitive and functional status, for which no information is available in claims data. Using survey data, we determined the magnitude of such bias and corrected the association between SSRI use and hip fractures accordingly. Methods: We used the Medicare Current Beneficiary Survey to determine the association between SSRI use and 5 potential confounding factors not measured in Medicare claims data: body mass index, smoking, activities of daily living score, cognitive impairment, and the Rosow-Breslau physical impairment scale. For 7126 participants aged ≥65 years, we estimated the association between SSRI use and these potential confounders. Combined with literature estimates of the associations between confounders and hip fractures, we were able to compute the extent of residual confounding bias caused by a failure to adjust for these factors. Results: Comparing SSRI users with nonusers, there was considerable overestimation of the association with hip fractures when activities of daily living scores (+21.5% bias) or Rosow-Breslau impairment scales (+10.6%) were unmeasured in claims data. All 5 unmeasured confounders together resulted in net confounding of +9.6% (range: −0.3% to +39%). After correction for this bias, the strength of association observed in claims data (RR = 1.8; 95% CI = 1.5 to 2.1) was comparable to a recent clinical study (RR = 1.5; 0.6 to 3.8), but the claims data study achieved formal statistical significance due to its much larger size (8239 vs. 288 hip fractures). Conclusions: Epidemiologic claims data studies tend to overestimate the relation between antidepressant use and hip fractures. However, after correcting for such bias, a significant association persists.
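    A standard way to carry out this kind of external adjustment for a single unmeasured binary confounder is the classic bias-factor formula, which needs only the confounder's prevalence among the exposed and unexposed and its risk ratio with the outcome. The sketch below uses hypothetical numbers rather than the study's actual survey estimates, and the study's exact computation may differ.

# External adjustment sketch (hypothetical inputs): residual confounding from one
# unmeasured binary confounder, expressed as a multiplicative bias on the risk ratio.

def confounding_bias_factor(p_exposed: float, p_unexposed: float, rr_cd: float) -> float:
    """Bias in the exposure-outcome risk ratio from failing to adjust for a binary
    confounder with prevalence p_exposed among the exposed, p_unexposed among the
    unexposed, and confounder-outcome risk ratio rr_cd."""
    return (p_exposed * (rr_cd - 1.0) + 1.0) / (p_unexposed * (rr_cd - 1.0) + 1.0)

rr_observed = 1.9                                   # hypothetical claims-data risk ratio
bias = confounding_bias_factor(0.30, 0.20, 2.0)     # hypothetical confounder inputs
print("bias factor:", round(bias, 3))               # 1.083
print("externally adjusted RR:", round(rr_observed / bias, 3))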
    This chapter introduces the issues of error and bias in statistical epidemiology. The types of error and bias and how an epidemiologist deals with them are described. The information and selection biases and how they happen are explained. Confounding and ways to identify a confounder are discussed.
    Epidemiological studies are prone to error because they usually study complex matters in human populations in natural settings rather than in laboratory conditions. Bias may be thought of as error which affects comparison groups unequally or leads to inappropriate inferences about one group compared with another. Three broad problems confront epidemiologists: selection of study populations, quality of information, and confounding. Selection and imperfect information cause biases. Confounding is not an error or bias as normally understood, but it leads to errors of data interpretation. The different epidemiological research designs have similar problems with error and bias, which are mostly inherent in the survey and disease registration methods. Principles which apply to all studies and help to minimize these errors are also similar. The chronology and structure of a research project offer a pragmatic framework for the systematic analysis of error, bias, and confounding.
    Researchers should design epidemiologic studies in such a way as to avoid or minimize known or suspected biases. They should acknowledge unavoidable biases and explain how they may affect results. Careful selection of the control group can minimize or avoid biases in case-control studies. Forms of selection bias include self-selection bias, diagnostic suspicion bias, and assembly (susceptibility) bias. The process of acquiring needed data can produce information bias. Forms of information bias are recall bias, caused by selective memory, and surveillance bias. Confounding occurs when two exposures or processes occur simultaneously and the effect of one is obscured or distorted by the effect of the other. A confounding variable can distort the apparent relationship between the exposure under study and the outcome of interest. If the study has measured the confounding variable, its effects can be disentangled. Age is the most common and important confounding variable. Since age tends to be related to both exposures and outcomes, researchers need to match subjects by age or to control for age in the analysis. Age may be a confounding variable in some case-control studies of the association between oral contraceptives (OCs) and cervical cancer. The risk of disease is reported as the odds ratio in case-control studies and as the relative risk in cohort studies. It is best to use well-designed studies and large sample sizes to detect statistically significant associations. Meta-analysis is increasingly used to enlarge sample sizes, but the individual study populations and variables are often very different, and these between-study differences can themselves act as confounders. Clinicians should consider these concerns when interpreting the results of epidemiologic studies. They must be prepared to address validity and clinical relevance. To do so, they need to be familiar with basic study designs and their associated issues in order to provide appropriate counseling and support informed clinical decision making.
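    To make the odds-ratio versus relative-risk distinction concrete, here is a small sketch using an invented 2x2 table; the counts are hypothetical and only illustrate the two calculations.

# Hypothetical 2x2 table of exposure by outcome: 40 exposed cases, 20 unexposed cases,
# 160 exposed non-cases (controls), 180 unexposed non-cases (controls).

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Case-control measure: a/b = exposed/unexposed cases, c/d = exposed/unexposed controls."""
    return (a * d) / (b * c)

def relative_risk(a: int, b: int, c: int, d: int) -> float:
    """Cohort measure: a/b = exposed/unexposed cases, c/d = exposed/unexposed non-cases."""
    risk_exposed = a / (a + c)
    risk_unexposed = b / (b + d)
    return risk_exposed / risk_unexposed

print("odds ratio:   ", round(odds_ratio(40, 20, 160, 180), 2))      # 2.25
print("relative risk:", round(relative_risk(40, 20, 160, 180), 2))   # 2.0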
    Confounding adjustment is important for observational studies to derive valid effect estimates for inference. Despite theoretical advances in confounder selection procedures, it is often challenging to distinguish between confounders and mediators due to the lack of information about the time-ordering and latency of each variable in the data. This is also the case for studies of perfluoroalkyl substances (PFAS), a group of persistent, endocrine-disrupting synthetic chemicals used in industry and consumer products. In this article, we used directed acyclic graphs to describe potential biases introduced by adjusting for, or stratifying by, a measure of obesity as an intermediate variable in PFAS exposure analyses. We compared results with and without adjustment for body mass index in two cross-sectional data analyses: (1) PFAS levels and maternal thyroid function during early pregnancy using the Danish National Birth Cohort and (2) PFAS levels and cardiovascular disease in adults using the National Health and Nutrition Examination Survey. In these examples, we showed that the potential heterogeneity observed in analyses stratified by overweight or obese status needs to be interpreted cautiously in light of collider-stratification bias. This article highlights the complexity of seemingly simple adjustment or stratification analyses and the need for careful consideration of the confounding and/or mediating role of obesity in PFAS studies.
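    The collider-stratification mechanism described above can be reproduced in a toy simulation: when an intermediate such as BMI is a common effect of the exposure and of an unmeasured cause of the outcome, adjusting for it manufactures an association where none exists. The linear model and coefficients below are hypothetical and are not drawn from the article's data.

# Collider-stratification sketch (hypothetical coefficients): exposure P has no effect
# on outcome Y, but adjusting for the intermediate M (a common effect of P and of an
# unmeasured cause U of Y) induces a spurious P-Y association.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

P = rng.normal(size=n)                        # exposure (e.g., PFAS level)
U = rng.normal(size=n)                        # unmeasured common cause of M and Y
M = 0.7 * P + 0.7 * U + rng.normal(size=n)    # intermediate/collider (e.g., BMI)
Y = 1.0 * U + rng.normal(size=n)              # outcome; no effect of P by construction

def effect_of_P(covariates):
    """OLS coefficient of P in a regression of Y on P plus the given covariates."""
    design = np.column_stack([np.ones(n), P] + covariates)
    beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
    return beta[1]

print("true effect of P:      0.0")
print("unadjusted estimate:  ", round(effect_of_P([]), 3))   # close to 0
print("adjusted for M:       ", round(effect_of_P([M]), 3))  # spurious, roughly -0.33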