Misuse of Regression Adjustment for Additional Confounders Following Insufficient Propensity Score Balancing
Citations: 11 · References: 37 · Related Papers: 10
Abstract:
After propensity score (PS) matching, inverse probability weighting, or stratification or regression adjustment for the PS, one may compare exposure groups with or without further covariate adjustment. In the former case, although a typical application uses the same set of covariates in the PS model and in the post-PS stratification or regression, several studies adjust for additional confounders at this stage while ignoring the covariates that the PS has already balanced. We show the bias that arises from such partial adjustment, in which distinct sets of confounders are handled by the PS and by the subsequent regression or stratification: stratifying or regressing on the additional covariates after PS balancing re-introduces imbalance in the confounders that the PS had balanced when those confounders are ignored. We empirically illustrate this bias in the Rotterdam Tumor Bank, in which strong confounders distort the association between chemotherapy and recurrence-free survival. If additional covariates are adjusted for after PS balancing, the covariates conditioned on in the PS should be adjusted for again, or the PS should be re-estimated with the additional covariates included, to avoid bias owing to covariate imbalance.
Keywords: Inverse probability weighting; Stratification; Censoring (clinical trials)
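The pitfall described above can be illustrated with a small simulation. The sketch below is illustrative only (it is not the paper's Rotterdam Tumor Bank analysis): the covariates `x` and `z`, the data-generating model, and the sample size are assumptions, chosen so that `z` is an "additional" confounder correlated with the PS covariate `x`. It contrasts the partial adjustment the paper warns against with the two fixes the abstract recommends.

```python
# Illustrative simulation (not the authors' analysis): after IPW balancing on
# covariate x, an outcome regression that adjusts only for an additional
# confounder z ignores x and can re-introduce imbalance in x.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)
z = 0.7 * x + rng.normal(scale=0.7, size=n)            # z correlated with x
p_treat = 1 / (1 + np.exp(-(0.8 * x + 0.8 * z)))       # treatment depends on both
t = rng.binomial(1, p_treat)
y = 1.0 * t + 1.0 * x + 1.0 * z + rng.normal(size=n)   # true treatment effect = 1

def wls(design, outcome, w):
    """Weighted least squares: solve (D'WD) b = D'W y."""
    dw = design * w[:, None]
    return np.linalg.solve(design.T @ dw, dw.T @ outcome)

def ipw_weights(covs):
    """Inverse probability of treatment weights from a logistic PS model."""
    ps = LogisticRegression(C=1e6, max_iter=1000).fit(covs, t).predict_proba(covs)[:, 1]
    return t / ps + (1 - t) / (1 - ps)

ones = np.ones(n)
w_x = ipw_weights(x.reshape(-1, 1))                    # PS uses x only

# (a) misuse: after balancing x by IPW, regress on treatment and z only
b_partial = wls(np.column_stack([ones, t, z]), y, w_x)
# (b) fix 1: adjust again for the PS covariate x as well
b_readjust = wls(np.column_stack([ones, t, z, x]), y, w_x)
# (c) fix 2: re-estimate the PS including the additional covariate z
w_xz = ipw_weights(np.column_stack([x, z]))
b_refit = wls(np.column_stack([ones, t]), y, w_xz)

print("true effect                :", 1.0)
print("IPW(x) + regression on z   :", round(b_partial[1], 3))   # expected to drift from 1
print("IPW(x) + regression on z, x:", round(b_readjust[1], 3))
print("IPW(x, z), no regression   :", round(b_refit[1], 3))
```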
Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. In practice, however, some covariates might be associated with both the lifetime and the censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, such as the Kaplan–Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for the bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time-consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulations, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan–Meier approach when dependent censoring is ignored.
Keywords: Censoring (clinical trials); Inverse probability; Inverse probability weighting. Citations: 80.
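As a companion to the abstract above, here is a minimal inverse probability of censoring weighting (IPCW) sketch in Python rather than the article's R algorithm. It assumes censoring depends on a single binary covariate, estimates the censoring distribution by Kaplan–Meier within each covariate level, and up-weights observed events by the inverse censoring survival; the toy data-generating step is an assumption for illustration.

```python
# Minimal IPCW sketch: events are weighted by 1 / K_C(t-), with the censoring
# survival K_C estimated by Kaplan-Meier within levels of a binary covariate.
import numpy as np

def km_left(times, events):
    """Kaplan-Meier estimator returning S(t-), the left-continuous survival."""
    uniq = np.unique(times[events == 1])
    surv, s = [], 1.0
    for u in uniq:
        at_risk = np.sum(times >= u)
        d = np.sum((times == u) & (events == 1))
        s *= 1.0 - d / at_risk
        surv.append(s)
    surv = np.asarray(surv)

    def s_minus(t):
        t = np.atleast_1d(t)
        if uniq.size == 0:                          # no events: survival stays at 1
            return np.ones(t.shape)
        k = np.searchsorted(uniq, t, side="left")   # count of event times strictly < t
        return np.where(k == 0, 1.0, surv[np.maximum(k - 1, 0)])

    return s_minus

def ipcw_survival(time, event, group, eval_times):
    """IPCW estimate of S(t) when censoring depends on a binary covariate `group`."""
    n = len(time)
    kc_at_t = np.empty(n)
    for g in np.unique(group):
        idx = group == g
        kc = km_left(time[idx], 1 - event[idx])     # censoring treated as the "event"
        kc_at_t[idx] = kc(time[idx])
    kc_at_t = np.clip(kc_at_t, 1e-12, None)
    return np.array([1.0 - np.sum(event * (time <= u) / kc_at_t) / n
                     for u in eval_times])

# toy usage: both survival and censoring depend on g, so censoring is informative
rng = np.random.default_rng(1)
g = rng.binomial(1, 0.5, 5000)
t_true = rng.exponential(np.where(g == 1, 0.5, 1.5))
c = rng.exponential(np.where(g == 1, 0.8, 3.0))     # heavier censoring when g == 1
time, event = np.minimum(t_true, c), (t_true <= c).astype(int)

eval_times = np.array([0.5, 1.0, 2.0])
true_s = 0.5 * np.exp(-eval_times / 0.5) + 0.5 * np.exp(-eval_times / 1.5)
print("true S(t):", np.round(true_s, 3))
print("IPCW S(t):", np.round(ipcw_survival(time, event, g, eval_times), 3))
```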
Inverse probability weighting is an important propensity score weighting method for estimating the average treatment effect. Recent literature shows that it can be easily combined with covariate balancing constraints to reduce the detrimental effects of excessively large weights and improve balance. Other methods are available to derive weights that balance covariate distributions between the treatment groups without the involvement of propensity scores. We conducted comprehensive Monte Carlo experiments to study whether the use of covariate balancing constraints circumvents the need for correct propensity score model specification, and whether the use of a propensity score model further improves the estimation performance among methods that use similar covariate balancing constraints. We compared simple inverse probability weighting, two propensity score weighting methods with balancing constraints (covariate balancing propensity score, covariate balancing scoring rule), and two weighting methods with balancing constraints but without propensity scores (entropy balancing and kernel balancing). We observed that correct specification of the propensity score model remains important even when the constraints effectively balance the covariates. We also observed evidence suggesting that, with similar covariate balancing constraints, the use of a propensity score model improves the estimation performance when the dimension of covariates is large. These findings suggest that it is important to develop flexible, data-driven propensity score models that satisfy covariate balancing conditions.
Keywords: Inverse probability weighting. Citations: 24.
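One of the balancing-weight methods compared in the abstract above, entropy balancing, can be sketched compactly: control-unit weights are chosen to reproduce the treated covariate means exactly by solving the convex dual with log-sum-exp. This is a minimal sketch on assumed toy data, not the simulation design of the study.

```python
# Minimal entropy-balancing sketch: weights on control units that sum to one
# and exactly match the treated covariate means, via the convex dual problem.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def entropy_balance(x_control, target_means):
    """Weights w on control units with sum(w) = 1 and w @ x_control = target_means."""
    def dual(lam):
        return logsumexp(x_control @ lam) - lam @ target_means

    def grad(lam):
        logits = x_control @ lam
        w = np.exp(logits - logsumexp(logits))
        return x_control.T @ w - target_means        # zero when moments are matched

    lam = minimize(dual, np.zeros(x_control.shape[1]), jac=grad, method="BFGS").x
    logits = x_control @ lam
    return np.exp(logits - logsumexp(logits))

# toy usage: confounded treatment; check balance and estimate the ATT
rng = np.random.default_rng(2)
n = 4000
x = rng.normal(size=(n, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-x @ np.array([0.8, -0.5, 0.3]))))
y = x @ np.array([1.0, 1.0, -1.0]) + 2.0 * t + rng.normal(size=n)   # true effect = 2

w = entropy_balance(x[t == 0], x[t == 1].mean(axis=0))
print("max balance gap:", np.abs(w @ x[t == 0] - x[t == 1].mean(axis=0)).max())
print("ATT estimate   :", round(y[t == 1].mean() - w @ y[t == 0], 3))
```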
[Related paper; abstract not shown] Keywords: Inverse probability weighting; Marginal structural model. Citations: 565.
Observational data are often readily available or less costly to obtain than conducting a randomized controlled trial. With observational data, investigators may statistically evaluate the relationship between a treatment or therapy and outcomes. However, inherent in observational data is the potential for confounding arising from the nonrandom assignment of treatment. In this statistical grand rounds, we describe the use of propensity score methods (ie, using the probability of receiving treatment given covariates) to reduce bias due to measured confounders in anesthesia and perioperative medicine research. We provide a description of the theory and background appropriate for the anesthesia researcher and describe statistical assumptions that should be assessed in the course of a research study using the propensity score. We further describe 2 propensity score methods for evaluating the association of treatment or therapy with outcomes, propensity score matching and inverse probability of treatment weighting, and compare to covariate-adjusted regression analysis. We distinguish several estimators of treatment effect available with propensity score methods, including the average treatment effect, the average treatment effect for the treated, and average treatment effect for the controls or untreated, and compare to the conditional treatment effect in covariate-adjusted regression. We highlight the relative advantages of the various methods and estimators, describe analysis assumptions and how to critically evaluate them, and demonstrate methods in an analysis of thoracic epidural analgesia and new-onset atrial arrhythmias after pulmonary resection.
Keywords: Inverse probability weighting; Average treatment effect; Inverse probability. Citations: 150.
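The distinction among estimands discussed above comes down to different weight formulas once a propensity score is estimated. The sketch below shows the standard ATE, ATT, and ATC weights with a Hájek-type weighted contrast; the data-generating step is an illustrative assumption.

```python
# Minimal sketch of IPTW weights for the ATE, ATT, and ATC estimands.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20_000
x = rng.normal(size=(n, 2))
t = rng.binomial(1, 1 / (1 + np.exp(-(x @ np.array([1.0, -1.0])))))
y = x.sum(axis=1) + 1.5 * t + rng.normal(size=n)        # homogeneous effect 1.5

ps = LogisticRegression(C=1e6, max_iter=1000).fit(x, t).predict_proba(x)[:, 1]

# ATE: everyone re-weighted to the full-population covariate distribution
w_ate = t / ps + (1 - t) / (1 - ps)
# ATT: treated kept as-is, controls re-weighted to resemble the treated
w_att = t + (1 - t) * ps / (1 - ps)
# ATC: controls kept as-is, treated re-weighted to resemble the controls
w_atc = t * (1 - ps) / ps + (1 - t)

def weighted_diff(w):
    """Hajek-type weighted difference in mean outcomes."""
    return (np.sum(w * t * y) / np.sum(w * t)
            - np.sum(w * (1 - t) * y) / np.sum(w * (1 - t)))

for name, w in [("ATE", w_ate), ("ATT", w_att), ("ATC", w_atc)]:
    print(name, round(weighted_diff(w), 3))
```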
Inverse probability of treatment weighting is a popular propensity score-based approach to estimate marginal treatment effects in observational studies at risk of confounding bias. A major issue when estimating the propensity score is the presence of partially observed covariates. Multiple imputation is a natural approach to handle missing data on covariates: covariates are imputed and a propensity score analysis is performed in each imputed dataset to estimate the treatment effect. The treatment effect estimates from each imputed dataset are then combined to obtain an overall estimate. We call this method MIte. However, an alternative approach has been proposed, in which the propensity scores are combined across the imputed datasets (MIps). Therefore, there are remaining uncertainties about how to implement multiple imputation for propensity score analysis: (a) should we apply Rubin’s rules to the inverse probability of treatment weighting treatment effect estimates or to the propensity score estimates themselves? (b) does the outcome have to be included in the imputation model? (c) how should we estimate the variance of the inverse probability of treatment weighting estimator after multiple imputation? We studied the consistency and balancing properties of the MIte and MIps estimators and performed a simulation study to empirically assess their performance for the analysis of a binary outcome. We also compared the performance of these methods to complete case analysis and the missingness pattern approach, which uses a different propensity score model for each pattern of missingness, and a third multiple imputation approach in which the propensity score parameters are combined rather than the propensity scores themselves (MIpar). Under a missing at random mechanism, complete case and missingness pattern analyses were biased in most cases for estimating the marginal treatment effect, whereas multiple imputation approaches were approximately unbiased as long as the outcome was included in the imputation model. Only MIte was unbiased in all the studied scenarios and Rubin’s rules provided good variance estimates for MIte. The propensity score estimated in the MIte approach showed good balancing properties. In conclusion, when using multiple imputation in the inverse probability of treatment weighting context, MIte with the outcome included in the imputation model is the preferred approach.
Keywords: Inverse probability weighting; Imputation (statistics); Inverse probability; Average treatment effect. Citations: 204.
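A minimal sketch of the MIte strategy favored in the abstract above: impute the covariates (with the outcome included in the imputation model), run the IPTW analysis in each completed dataset, and pool the treatment-effect estimates with Rubin's rules. The imputer, the toy data, and the simplified variance handling are assumptions, not the authors' implementation.

```python
# Minimal MIte-style sketch: multiple imputation of covariates, IPTW per
# completed dataset, then pooling of the point estimates.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

def iptw_ate(x, t, y):
    """Hajek IPTW estimate of the marginal treatment effect."""
    ps = LogisticRegression(C=1e6, max_iter=1000).fit(x, t).predict_proba(x)[:, 1]
    w = t / ps + (1 - t) / (1 - ps)
    return (np.sum(w * t * y) / np.sum(w * t)
            - np.sum(w * (1 - t) * y) / np.sum(w * (1 - t)))

def mite(x_missing, t, y, n_imputations=10):
    """Impute covariates (outcome in the imputation model), estimate per dataset, pool."""
    estimates = []
    for m in range(n_imputations):
        imp = IterativeImputer(sample_posterior=True, random_state=m)
        # include treatment and outcome in the imputation model, as recommended
        completed = imp.fit_transform(np.column_stack([x_missing, t, y]))
        x_completed = completed[:, : x_missing.shape[1]]
        estimates.append(iptw_ate(x_completed, t, y))
    estimates = np.asarray(estimates)
    point = estimates.mean()                   # Rubin's rules point estimate
    between = estimates.var(ddof=1)            # between-imputation variance
    # Rubin's total variance would be within_mean + (1 + 1/M) * between, where the
    # within-imputation variance comes from a robust or bootstrap variance of the
    # IPTW estimator in each completed dataset (omitted in this sketch).
    return point, between

# toy usage: one covariate missing at random, missingness depends on the other
rng = np.random.default_rng(4)
n = 3000
x = rng.normal(size=(n, 2))
t = rng.binomial(1, 1 / (1 + np.exp(-x.sum(axis=1))))
y = x.sum(axis=1) + 1.0 * t + rng.normal(size=n)
x_missing = x.copy()
x_missing[(x[:, 0] > 0) & (rng.random(n) < 0.6), 1] = np.nan
print(mite(x_missing, t, y))
```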
We study problems with multiple missing covariates and partially observed responses. We develop a new framework to handle complex missing covariate scenarios via inverse probability weighting, regression adjustment, and a multiply-robust procedure. We apply our framework to three classical problems: the Cox model from survival analysis, missing response, and binary treatment from causal inference. We also discuss how to handle missing covariates in these scenarios, and develop associated identifying theories and asymptotic theories. We apply our procedure to simulations and an Alzheimer's disease dataset and obtain meaningful results.
Keywords: Inverse probability weighting. Citations: 0.
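The abstract above combines inverse probability weighting with regression adjustment; the classical binary-treatment version of that combination is the augmented IPW (doubly robust) estimator, sketched below for fully observed data. The multiply robust, missing-covariate machinery of the paper is not reproduced, and the toy data are an assumption.

```python
# Minimal augmented IPW (doubly robust) sketch for a binary treatment with
# fully observed covariates and outcome.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(x, t, y):
    """AIPW estimate of the ATE: consistent if either the outcome regressions
    or the propensity score model is correctly specified."""
    ps = LogisticRegression(C=1e6, max_iter=1000).fit(x, t).predict_proba(x)[:, 1]
    mu1 = LinearRegression().fit(x[t == 1], y[t == 1]).predict(x)
    mu0 = LinearRegression().fit(x[t == 0], y[t == 0]).predict(x)
    pseudo1 = mu1 + t * (y - mu1) / ps
    pseudo0 = mu0 + (1 - t) * (y - mu0) / (1 - ps)
    return np.mean(pseudo1 - pseudo0)

# toy usage with a true effect of 2
rng = np.random.default_rng(5)
n = 10_000
x = rng.normal(size=(n, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-x @ np.array([0.5, -0.5, 0.5]))))
y = x.sum(axis=1) + 2.0 * t + rng.normal(size=n)
print(round(aipw_ate(x, t, y), 3))
```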
One of the primary problems facing statisticians who work with survival data is the loss of information that occurs with right-censored data. This research considers recovering some of this endpoint information through the use of a prognostic covariate that is measured on each individual. We begin by defining a survival estimate which uses time-dependent covariates to estimate the underlying survival curves more precisely in the presence of censoring. This estimate has a smaller asymptotic variance than the usual Kaplan-Meier estimator in the presence of censoring and reduces to the Kaplan-Meier estimator (1958, Journal of the American Statistical Association 53, 457-481) in situations where the covariate is not prognostic or no censoring occurs. In addition, this estimate remains consistent when the incorporated covariate contains information about the censoring process as well as survival information. Because the Kaplan-Meier estimate is known to be biased in this situation due to informative censoring, we recommend use of our estimate.
Keywords: Censoring (clinical trials). Citations: 66.
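The sketch below illustrates the problem the abstract above addresses, not the authors' time-dependent-covariate estimator: when censoring depends on a prognostic covariate, the pooled Kaplan–Meier is biased, whereas covariate-specific Kaplan–Meier curves averaged over the covariate distribution remain valid because the event and censoring times are independent within covariate levels. The binary covariate and exponential data-generating model are assumptions.

```python
# Pooled Kaplan-Meier vs. covariate-stratified Kaplan-Meier averaged over the
# covariate distribution, under covariate-driven (informative) censoring.
import numpy as np

def km(times, events, t_eval):
    """Kaplan-Meier survival estimate S(t_eval)."""
    s = 1.0
    for u in np.unique(times[events == 1]):
        if u > t_eval:
            break
        s *= 1.0 - np.sum((times == u) & (events == 1)) / np.sum(times >= u)
    return s

rng = np.random.default_rng(6)
n = 10_000
g = rng.binomial(1, 0.5, n)                          # prognostic covariate
t_true = rng.exponential(np.where(g == 1, 0.4, 2.0))  # shorter survival when g == 1
c = rng.exponential(np.where(g == 1, 0.6, 6.0))       # heavier censoring when g == 1
time, event = np.minimum(t_true, c), (t_true <= c).astype(int)

t_eval = 1.0
true_s = 0.5 * np.exp(-t_eval / 0.4) + 0.5 * np.exp(-t_eval / 2.0)
pooled = km(time, event, t_eval)
stratified = sum(np.mean(g == k) * km(time[g == k], event[g == k], t_eval)
                 for k in (0, 1))
print("true S(1)                :", round(true_s, 3))
print("pooled KM                :", round(pooled, 3))      # expected to drift from truth
print("stratified KM, averaged  :", round(stratified, 3))
```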
[Related paper; abstract not shown] Keywords: Censoring (clinical trials); Baseline. Citations: 1.
In propensity score analysis, the frequently used regression adjustment involves regressing the outcome on the estimated propensity score and treatment indicator. This approach can be highly efficient when model assumptions are valid, but can lead to biased results when the assumptions are violated. We extend the simple regression adjustment to a varying coefficient regression model that allows for nonlinear association between outcome and propensity score. We discuss its connection with some propensity score matching and weighting methods, and show that the proposed analytical framework can shed light on the intrinsic connection among some mainstream propensity score approaches (stratification, regression, kernel matching, and inverse probability weighting) and handle commonly used causal estimands. We derive analytic point and variance estimators that properly take into account the sampling variability in the estimated propensity score. Extensive simulations show that the proposed approach possesses desired finite sample properties and demonstrates competitive performance in comparison with other methods estimating the same causal estimand. The proposed methodology is illustrated with a study on right heart catheterization.
Keywords: Inverse probability weighting; Average treatment effect. Citations: 17.
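In the spirit of the varying-coefficient regression described above, the sketch below regresses the outcome on a spline basis in the estimated propensity score, interacted with treatment, so the treatment effect may vary with the score; averaging the fitted effect gives an ATE-type estimate. This is a simplified sketch, not the paper's estimator or its variance formulas, and the toy data are assumptions.

```python
# Regression adjustment on the estimated propensity score with a
# treatment-by-spline interaction (varying-coefficient flavor).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(7)
n = 10_000
x = rng.normal(size=(n, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-x @ np.array([0.7, -0.7, 0.4]))))
y = np.sin(x[:, 0]) + x[:, 1] + 1.0 * t + rng.normal(size=n)   # true ATE = 1

# step 1: estimate the propensity score
ps = LogisticRegression(C=1e6, max_iter=1000).fit(x, t).predict_proba(x)[:, 1]

# step 2: spline basis in the propensity score, interacted with treatment
basis = SplineTransformer(n_knots=6, degree=3).fit_transform(ps.reshape(-1, 1))
design = np.column_stack([basis, basis * t[:, None]])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

# step 3: fitted treatment effect at score e is basis(e) @ coef_t; average it
tau_hat = basis @ coef[basis.shape[1]:]
print("ATE-type estimate:", round(tau_hat.mean(), 3))
```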
Propensity score (PS)-based methods, such as matching, stratification, regression adjustment, and simple and augmented inverse probability weighting, are popular for controlling for observed confounders in observational studies of causal effects. More recently, we proposed penalized spline of propensity prediction (PENCOMP), which multiply-imputes outcomes for unassigned treatments using a regression model that includes a penalized spline of the estimated selection probability and other covariates. For PS methods to work reliably, there should be sufficient overlap in the propensity score distributions between treatment groups. Limited overlap can result in fewer subjects being matched or in extreme weights causing numerical instability and bias in causal estimation. The problem of limited overlap suggests (a) defining alternative estimands that restrict inferences to subpopulations where all treatments have the potential to be assigned, and (b) excluding or down-weighting sample cases where the propensity to receive one of the compared treatments is close to zero. We compared PENCOMP and other PS methods for estimating alternative causal estimands when limited overlap occurs. Simulations suggest that, when there are extreme weights, PENCOMP tends to outperform the weighted estimators for the ATE and performs similarly to the weighted estimators for the alternative estimands. We illustrate PENCOMP in two applications: the effect of antiretroviral treatments on CD4 counts using the Multicenter AIDS Cohort Study (MACS), and whether right heart catheterization (RHC) is beneficial in treating critically ill patients.
Keywords: Inverse probability; Inverse probability weighting; Marginal structural model; Average treatment effect. Citations: 1.
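A much-simplified, single-imputation sketch of the spline-of-propensity prediction idea behind PENCOMP: fit an arm-specific outcome model on a spline of the estimated propensity score plus covariates, predict each subject's outcome under the unassigned treatment, and average the imputed contrasts. The penalized-spline fit, multiple-imputation draws, and explicit handling of limited overlap are all omitted; the toy data are assumptions.

```python
# Simplified spline-of-propensity imputation sketch (not the full PENCOMP
# procedure): impute the unobserved potential outcome from an arm-specific
# outcome model that includes a spline of the estimated propensity score.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(8)
n = 10_000
x = rng.normal(size=(n, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-x @ np.array([0.6, -0.6, 0.3]))))
y = x.sum(axis=1) + 1.5 * t + rng.normal(size=n)          # true ATE = 1.5

ps = LogisticRegression(C=1e6, max_iter=1000).fit(x, t).predict_proba(x)[:, 1]
spline = SplineTransformer(n_knots=6, degree=3)
features = np.column_stack([spline.fit_transform(ps.reshape(-1, 1)), x])

# arm-specific outcome models on (spline of PS, covariates)
model1 = LinearRegression().fit(features[t == 1], y[t == 1])
model0 = LinearRegression().fit(features[t == 0], y[t == 0])

# keep the observed outcome, impute the potential outcome that was not observed
y1 = np.where(t == 1, y, model1.predict(features))
y0 = np.where(t == 0, y, model0.predict(features))
print("imputation-based ATE estimate:", round(np.mean(y1 - y0), 3))
```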