
In many clinical trials and social science studies, comparing the treatment effect between two groups is a central part of the research. This effect can be measured as a difference or ratio between treatment groups, and the average causal effect (ACE) is often of primary interest, since individual treatment effects are difficult to obtain, especially when the study is not randomized. Under conventional randomized experiments, the treatment effect can be evaluated easily, since all factors other than the treatment assignment (i.e., potential confounding factors) are controlled by the experimenter. In an observational study, however, many confounding factors are present, and they may severely distort the causal effect because the distribution of covariates can differ between treatment groups (Rubin, 1978).
In such cases, the inverse probability weighting (IPW) method can be used to balance the covariate distributions between the two treatment groups by weighting the observed outcomes by the inverse of the estimated propensity scores (Austin, 2011). The resulting estimator may approximate the effect as if it were obtained from a randomized experiment, but it requires a correctly specified propensity score (PS) model for stable analysis (Rosenbaum and Rubin, 1983). In real applications, however, problems due to model misspecification often arise, since the true propensity model is rarely known exactly in an observational study. Alternatively, the augmented inverse probability weighting estimator (AIPWE), a hybrid of the regression and IPW methods, has been extensively studied as a generalization of the IPW method (Robins et al., 1994).
However, AIPWE can perform worse than the simple IPW method when both working models are incorrectly specified (Kang and Schafer, 2007). There have been many studies attempting to overcome this limitation of the doubly robust (DR) estimator. In this context, machine learning techniques are a natural choice for accommodating potential model misspecification in causal regression modeling (Lee et al., 2010).
Despite various proposals for improving causal estimators, researchers may lack adequate performance comparisons among these methods. In this article, we present an empirical study comparing several recently developed causal inference models for estimating the ACE. We briefly review recent developments in causal analysis, including the regression-based estimator, IPWE, AIPWE, targeted maximum likelihood estimation (TMLE), and causal forest (CF), and explore which methods perform better in various situations via simulation experiments. Our simulation study covers a wide range of scenarios that may change the ACE and investigates both existing and newly adapted machine learning models. We replace the outcome regression with a generalized smoothing model (GSM) and the IPW model with several machine learning models under correct, misspecified, and omitted specifications, and show which combinations achieve lower bias and mean squared error (MSE) across a wide range of scenarios.
The rest of the article is organized as follows. Section 2 presents basic notation and underlying assumptions and summarizes various recent causal effect estimation methods. Section 3 provides extensive simulation results that compare performance across different combinations of outcome regression and propensity score models. In Section 4, several causal models are applied to percutaneous coronary intervention (PCI) data for illustration. Section 5 concludes with a brief discussion of our results.
In causal inference, we are often interested in clarifying causal relationships in observational data. To allow for a precise presentation of the causal treatment estimators, we need additional notation and assumptions for the observed data (Hernán and Robins, 2020). For each subject $i = 1, \ldots, n$, let $Y_i$ denote the observed outcome, $A_i \in \{0, 1\}$ the binary treatment indicator, and $X_i$ a vector of pretreatment covariates, and let $Y_i(1)$ and $Y_i(0)$ denote the potential outcomes under treatment and control, respectively. The target parameter is the average causal effect

$$\Delta = E\{Y(1)\} - E\{Y(0)\},$$

which measures the average treatment difference between the treated and control conditions.
To quantify the causal effect in observational studies, we shall make the following three common assumptions in causal analysis (Hernán and Robins, 2020):

(i) Consistency: $Y = Y(a)$ whenever $A = a$;

(ii) Conditional exchangeability: $Y(a) \perp\!\!\!\perp A \mid X$ for $a = 0, 1$;

(iii) Positivity: $0 < P(A = 1 \mid X = x) < 1$ for all $x$ in the support of $X$.
The consistency assumption states that there are no multiple versions of treatment, which means that the mechanism used to assign the treatments does not matter and that assigning the treatments in a different way does not constitute a different treatment. The exchangeability assumption says that the set of observed pretreatment covariates is sufficiently rich, such that there exist no unmeasured or unknown confounders not included in $X$. Finally, the positivity assumption requires that every subject has a positive probability of receiving each treatment level, so that both potential outcomes are estimable throughout the covariate distribution.
The commonly used randomized experimental designs allow researchers to estimate the ACE in the manner closest to the theory, in which we may regard the treatment assignment $A$ as independent of the potential outcomes and covariates, so that the ACE is consistently estimated by the simple difference of sample means between the two treatment groups.
Unlike randomized experimental data, however, the treatment group is unevenly distributed or associated with underlying confounders in observational data, making it difficult to use the simple difference of group means as a valid estimate of the ACE.
The estimation of treatment effects may proceed in two stages: by modeling the conditional outcome mean $Q(a, x) = E(Y \mid A = a, X = x)$ and then averaging its fitted values, a regression-based causal effect estimate can be obtained by

$$\hat{\Delta}_{\mathrm{reg}} = \frac{1}{n} \sum_{i=1}^{n} \left\{ \hat{Q}(1, X_i) - \hat{Q}(0, X_i) \right\},$$

where $\hat{Q}(a, X_i)$ denotes the fitted outcome for subject $i$ under treatment level $a$.
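For concreteness, the following minimal R sketch illustrates this regression-based (g-computation) step; the data frame `dat`, with outcome `y`, binary treatment `a`, and covariates `x1`–`x3`, is a hypothetical example rather than the exact setup used in the paper.

```r
# Hypothetical data frame 'dat' with outcome y, binary treatment a,
# and covariates x1, x2, x3
fit <- glm(y ~ a + x1 + x2 + x3, data = dat)

# Predicted outcomes for every subject under treatment and control
q1 <- predict(fit, newdata = transform(dat, a = 1))
q0 <- predict(fit, newdata = transform(dat, a = 0))

# Regression-based (g-computation) estimate of the ACE
ace_reg <- mean(q1 - q0)
```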
In our simulations, we primarily employ the generalized smoothing model (GSM) of Helwig (2020), which estimates the unknown smooth regression function under a multiple and generalized nonparametric regression model. The GSM algorithm is available in the R package npreg.
On the other hand, the inverse probability weighted estimator (IPWE) can be described as a difference of weighted averages that assigns inverse probability weights to each treatment group:

$$\hat{\Delta}_{\mathrm{IPW}} = \frac{1}{n} \sum_{i=1}^{n} \left\{ \frac{A_i Y_i}{\hat{e}(X_i)} - \frac{(1 - A_i) Y_i}{1 - \hat{e}(X_i)} \right\},$$

where $e(x) = P(A = 1 \mid X = x)$ denotes the propensity score. We can estimate the propensity score by positing a model for the treatment assignment given the observed covariates.
Conceptually, IPWE attempts to fully adjust for measured confounders by balancing them across levels of treatment through the estimated weights. When the functional form of the propensity score is unknown, logistic and multinomial regression models are commonly used for binary and multiple treatments, respectively.
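Continuing the hypothetical `dat` example, a minimal IPW sketch in R:

```r
# Estimate the propensity score with a logistic regression model
ps_fit <- glm(a ~ x1 + x2 + x3, data = dat, family = binomial)
ehat <- fitted(ps_fit)

# IPW estimate of the ACE as a difference of weighted averages
ace_ipw <- mean(dat$a * dat$y / ehat -
                (1 - dat$a) * dat$y / (1 - ehat))
```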
The propensity scores, when properly estimated, ensure that covariates are comparably distributed across the two treatment groups, and can help control the influence of extreme observations. In this context, machine learning methods, such as boosted ensemble models, have been shown to yield causal effect estimates with many desirable properties (Lee et al., 2010).
An alternative and improved estimator is the augmented inverse probability weighted estimator (AIPWE), which combines the properties of both the regression-based estimator and IPWE (Robins et al., 1994):

$$\hat{\Delta}_{\mathrm{AIPW}} = \frac{1}{n} \sum_{i=1}^{n} \left[ \frac{A_i Y_i}{\hat{e}(X_i)} - \frac{A_i - \hat{e}(X_i)}{\hat{e}(X_i)} \hat{Q}(1, X_i) \right] - \frac{1}{n} \sum_{i=1}^{n} \left[ \frac{(1 - A_i) Y_i}{1 - \hat{e}(X_i)} + \frac{A_i - \hat{e}(X_i)}{1 - \hat{e}(X_i)} \hat{Q}(0, X_i) \right],$$

where $\hat{Q}$ and $\hat{e}$ are the fitted outcome and propensity models, respectively. Notice that the first term inside each bracket is the IPW component, while the second, augmentation term exploits the outcome model. The resulting estimator is doubly robust (DR): it remains consistent if either the outcome model or the propensity model, but not necessarily both, is correctly specified.
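Continuing the earlier sketches, the AIPWE combines the fitted values `q1` and `q0` with the estimated propensity scores `ehat`; again, this is an illustrative sketch rather than the exact implementation used in the paper.

```r
# AIPW (doubly robust) estimate, reusing q1, q0, and ehat
# from the regression and IPW sketches above
dr1 <- q1 + dat$a * (dat$y - q1) / ehat
dr0 <- q0 + (1 - dat$a) * (dat$y - q0) / (1 - ehat)
ace_aipw <- mean(dr1 - dr0)
```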
TMLE (van der Laan and Rose, 2011; Schuler and Rose, 2017) is another well-established doubly robust, maximum-likelihood-based method for causal effect estimation. Like AIPWE, the TMLE procedure requires specifying both the outcome and propensity models. It begins by estimating the initial conditional mean model of the outcome given the treatment and covariates, $\hat{Q}^{(0)}(A, X)$, together with the propensity score $\hat{e}(X)$.
Then, we generate “targeted” estimates of the set of potential outcomes, incorporating the so-called clever covariate

$$H(A, X) = \frac{A}{\hat{e}(X)} - \frac{1 - A}{1 - \hat{e}(X)}$$

as an adjustment to reduce the bias. Specifically, by treating the initial estimate $\hat{Q}^{(0)}$ as a fixed offset, we fit the one-parameter fluctuation model

$$\operatorname{logit}\{Q^{(1)}(A, X)\} = \operatorname{logit}\{\hat{Q}^{(0)}(A, X)\} + \epsilon H(A, X)$$

to obtain the updated estimate $\hat{Q}^{(1)}$, where $\hat{\epsilon}$ is the estimated fluctuation parameter. The final TMLE of the ACE is the plug-in estimate $n^{-1} \sum_{i=1}^{n} \{\hat{Q}^{(1)}(1, X_i) - \hat{Q}^{(1)}(0, X_i)\}$.
TMLE is a two-step semiparametric procedure that solves the efficient influence curve estimating equation, and thereby yields an efficient, or at least locally efficient, estimator of the parameter of interest. In addition, combined with the super-learner (SL) algorithm (van der Laan et al., 2007), TMLE can flexibly incorporate an ensemble of machine learning algorithms when estimating both the outcome and propensity models.
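For reference, below is a minimal sketch using the R package tmle, which implements this procedure with SL support; the hypothetical `dat` and the SL library choices are illustrative assumptions, not the paper's exact configuration.

```r
library(tmle)  # doubly robust TMLE with super-learner support

fit_tmle <- tmle(Y = dat$y, A = dat$a,
                 W = dat[, c("x1", "x2", "x3")],
                 Q.SL.library = c("SL.glm", "SL.gam"),  # outcome model library
                 g.SL.library = c("SL.glm", "SL.gam"))  # propensity model library
fit_tmle$estimates$ATE$psi  # targeted estimate of the ACE
```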
Causal forest (CF) is a causal inference learning method that extends Breiman (2001)’s random forest (Wager and Athey, 2018; Athey et al., 2019).
The fundamental idea of CF is to construct partitions of the covariate space within which the samples act as if they came from a randomized trial, which can be implemented as follows. Suppose a causal tree partitions the covariate space into terminal leaves, and let $L(x)$ denote the leaf containing a point $x$. Within each leaf, the treatment effect is estimated by the difference in average outcomes between the treated and control observations falling in that leaf:

$$\hat{\tau}(x) = \frac{\sum_{i : X_i \in L(x)} A_i Y_i}{\sum_{i : X_i \in L(x)} A_i} - \frac{\sum_{i : X_i \in L(x)} (1 - A_i) Y_i}{\sum_{i : X_i \in L(x)} (1 - A_i)}.$$

Then, the corresponding ACE estimate with the causal forest can be obtained by averaging $\hat{\tau}(X_i)$ over the trees and over the sample, $\hat{\Delta}_{\mathrm{CF}} = n^{-1} \sum_{i=1}^{n} \hat{\tau}(X_i)$.
The CF algorithm is available in the R package grf.
Unlike conventional tree algorithms, the CF algorithm is not based on minimizing the mean squared error of the outcome in a regression tree. Since the true individual treatment effect is never observed, the splitting criterion instead targets heterogeneity in the estimated treatment effects, and “honest” subsampling, which uses separate subsamples for placing the splits and for estimating the leaf effects, is employed to obtain valid statistical inference (Wager and Athey, 2018).
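Below is a minimal sketch of this step with the grf package, again using the hypothetical `dat`; `average_treatment_effect()` aggregates the leaf-level effects into an ACE estimate.

```r
library(grf)  # causal forest of Wager and Athey (2018)

X <- as.matrix(dat[, c("x1", "x2", "x3")])
cf <- causal_forest(X = X, Y = dat$y, W = dat$a)

# Doubly robust ACE estimate averaged over the whole sample
average_treatment_effect(cf, target.sample = "all")
```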
In this section, we provide empirical simulation results to evaluate the finite-sample performance of several causal inference methods under a wide range of scenarios. For each outcome model ($Q$) and each propensity score model, we consider correct, misspecified, and omitted specifications.
Our simulation involves three covariates, $X_1$, $X_2$, and $X_3$, which jointly determine both the probability of treatment assignment and the outcome. In the misspecified model case, the working models use an incorrect functional form of these covariates; in the omitted model case, we assume that an informative covariate is excluded from the working model altogether.
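Since the exact generating equations are not reproduced here, the following R sketch merely illustrates the style of design described above; all coefficients and functional forms are hypothetical choices, not the paper's actual simulation model.

```r
set.seed(1)
n <- 500

# Three hypothetical covariates driving both treatment and outcome
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rbinom(n, 1, 0.5)

# Treatment assigned through a logistic propensity model
e <- plogis(-0.5 + 0.4 * x1 - 0.3 * x2 + 0.6 * x3)
a <- rbinom(n, 1, e)

# Outcome with a true ACE of 1; the nonlinear x2 term makes
# misspecification of the working models consequential
y <- a + x1 + 0.5 * x2^2 + 0.8 * x3 + rnorm(n)
dat <- data.frame(y, a, x1, x2, x3)

# A misspecified working model might enter x2 linearly;
# an omitted working model might drop x3 entirely.
```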
In this setting, we seek to find which of the estimators provides greater robustness to model disturbances under nine scenarios, in which both or only one of the outcome model and the IPW model is correctly specified, misspecified, or omitted. For ease of presentation, we consider two outcome models, linear regression (Reg) and GSM, under the different specifications of the outcome model ($Q$), and four propensity score models (GLM, Tree, GBM, and SL).
Our implementation relies on several R packages, including: (i) ipred, which provides improved predictive models by indirect classification and bagging for classification, regression, and survival problems, as well as resampling-based estimators of prediction error; (ii) gbm for gradient boosting; and (iii) SuperLearner for the SL ensemble.
Tables 1 and 2 show the simulation results of the IPWE, AIPWE, and TMLE estimators under the correct, misspecified, and omitted specifications of the outcome and IPW models, respectively, for estimating the average causal effect (ACE). The results in Tables 1 and 2 are based on the model specifications described above.
Table 3 summarizes the results of the CF, TMLE-SL, and AIPWE-SL estimators. Overall, CF performs well without requiring any complex model specification. However, when both the outcome and propensity models are correctly specified, CF slightly underperforms TMLE-SL and AIPWE-SL with respect to both bias and MSE. With small samples, CF has a relatively large MSE, but as the sample size increases, the difference gradually decreases, and in some cases CF shows better MSE performance. We note, however, that the simulation scenarios in Table 3 are not comprehensive, and care should be taken in generalizing these findings.
Finally, Figure 1 displays eleven ACE estimators when both the outcome and propensity models are misspecified. See the note of Figure 1 for the model descriptions: for example, “model 4: TMLE-Reg-SL” denotes the composite TMLE method implementing Reg for the outcome model and SL for the propensity model. Notice that CF (model 1) performs best in this case, while the two IPWEs (models 2 and 3) clearly underperform the other competitors due to model misspecification. Although AIPWE and TMLE work well without much difference, their performance depends on the propensity model specification rather than the outcome model. For example, AIPWE-Reg-SL (model 5) outperforms AIPWE-Reg-GLM (model 7), while the difference between AIPWE-Reg-SL (model 5) and AIPWE-GSM-SL (model 8) is less noticeable. On the other hand, comparing IPWE-SL (model 2) with AIPWE-Reg-SL (model 5) shows that doubly robust estimation helps reduce both bias and variance even in the presence of model misspecification. We conclude that the use of flexible machine learning methods, in particular for propensity models, is beneficial for reducing potential biases and achieving results more robust to model misspecification.
As an illustration, we consider data from a percutaneous coronary intervention (PCI) observational study of 996 patients at Ohio Heart Health, Christ Hospital, Cincinnati, in 1997 (Li and Shen, 2020). In the study, patients treated at the Lindner Center were followed for at least six months, and two treatment methods were compared: usual PCI care alone and PCI augmented with abciximab.
We begin by estimating propensity scores using the logistic, Tree, GBM, and SL algorithms with seven clinical predictors. Figure 2 displays the resulting propensity score distributions for the two treatment groups.
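As an illustration of this step, a minimal R sketch using the SuperLearner package is given below; the data frame `pci` and the predictor names (which follow the publicly available Lindner dataset) are placeholders for the actual study data.

```r
library(SuperLearner)

# Hypothetical data frame 'pci' with the treatment indicator 'abcix'
# and seven clinical predictors
preds <- pci[, c("stent", "height", "female", "diabetic",
                 "acutemi", "ejecfrac", "ves1proc")]

# Super-learner ensemble of a logistic model, a tree, and boosting
sl_fit <- SuperLearner(Y = pci$abcix, X = preds, family = binomial(),
                       SL.library = c("SL.glm", "SL.rpart", "SL.gbm"))
ps_sl <- sl_fit$SL.predict  # estimated propensity scores
```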
The results in Table 4 imply similar conclusions across all causal estimators (except for the regression estimator): patients receiving abciximab show significantly lower six-month mortality, whereas the effect on cardiac billing (cardbill) is generally not statistically significant under the weighting-based estimators, as most Wald intervals cover zero.
Double robustness (DR) is an important property in causal inference for correctly inferring a target estimand after adjusting for potential confounders (Tsiatis, 2007). In observational studies, DR methods play an important role in estimating causal effects: they remain consistent if at least one of the two working models (the outcome model or the propensity score model) is correctly specified.
In this paper, we tried to compare as many methods as possible under different simulation configurations, but our results are limited in that we cannot cover all possible scenarios. Therefore, care should be taken not to overgeneralize our findings. The main implication of this paper is that DR estimators, such as AIPWE and TMLE, supported by machine learning algorithms, work well even when both models are imperfect. Their performance is quite similar, and it is hard to say that one is superior to the other in our setting. CF works very reliably and shows relatively more consistent results when there is a highly nonlinear pattern in the data. However, these causal estimators cannot completely remove biases, especially when informative covariates are omitted from the model. Hence, investigators should focus more on study design to obtain as much information as possible about the target variable. Finally, in our experiments, it is assumed that the outcome is completely observed. However, the outcome is often masked due to censoring in many observational studies. It would be interesting to explore causal machine learning algorithms and measure their performance in survival analysis (Choi et al.).
Summary statistics of simulation results for estimating the ACE: bias and MSE of the TMLE, AIPWE, and IPWE estimators under correct, misspecified, and omitted specifications of the outcome model (Q: GSM or Reg) and the propensity score model (GLM, Tree, GBM, SL)
Method | Q | GLM Bias | GLM MSE | Tree Bias | Tree MSE | GBM Bias | GBM MSE | SL Bias | SL MSE
---|---|---|---|---|---|---|---|---|---
TMLE | GSM | 0.002 | 0.026 | 0.005 | 0.023 | 0.005 | 0.023 | 0.005 | 0.020 | ||
Reg | −0.008 | 0.271 | −0.006 | 0.053 | −0.004 | 0.062 | −0.001 | 0.039 | |||
GSM | 0.001 | 0.061 | 0.002 | 0.024 | 0.004 | 0.020 | 0.003 | 0.020 | |||
Reg | 0.018 | 0.169 | −0.002 | 0.051 | 0.001 | 0.042 | 0.000 | 0.041 | |||
GSM | 0.003 | 0.025 | 0.005 | 0.022 | 0.005 | 0.023 | 0.004 | 0.020 | |||
Reg | −0.006 | 0.265 | −0.006 | 0.053 | −0.004 | 0.066 | −0.003 | 0.040 | |||
GSM | −0.095 | 0.142 | 0.011 | 0.092 | −0.021 | 0.071 | 0.084 | 0.067 | |||
Reg | −0.130 | 0.178 | 0.025 | 0.099 | −0.022 | 0.078 | 0.113 | 0.070 | |||
GSM | 0.545 | 0.616 | 0.259 | 0.151 | 0.313 | 0.166 | 0.268 | 0.135 | |||
Reg | 0.689 | 0.847 | 0.295 | 0.177 | 0.348 | 0.189 | 0.314 | 0.160 | |||
GSM | −0.094 | 0.173 | 0.008 | 0.095 | −0.036 | 0.112 | 0.050 | 0.084 | |||
Reg | −0.129 | 0.214 | 0.019 | 0.099 | −0.043 | 0.125 | 0.068 | 0.084 | |||
GSM | 0.204 | 0.098 | 0.160 | 0.093 | 0.193 | 0.083 | 0.133 | 0.060 | |||
Reg | 0.209 | 0.337 | 0.122 | 0.104 | 0.169 | 0.105 | 0.097 | 0.064 | |||
GSM | −0.347 | 0.519 | −0.001 | 0.088 | 0.005 | 0.057 | 0.054 | 0.051 | |||
Reg | −0.363 | 0.624 | −0.037 | 0.114 | −0.032 | 0.079 | 0.016 | 0.067 | |||
GSM | 0.204 | 0.121 | 0.186 | 0.103 | 0.196 | 0.109 | 0.188 | 0.097 | |||
Reg | 0.212 | 0.356 | 0.149 | 0.111 | 0.176 | 0.133 | 0.153 | 0.095 | |||
AIPWE | GSM | 0.001 | 0.027 | 0.004 | 0.023 | 0.003 | 0.022 | 0.003 | 0.020 | ||
Reg | −0.006 | 0.321 | 0.001 | 0.054 | 0.002 | 0.063 | 0.005 | 0.040 | |||
GSM | 0.005 | 0.147 | 0.001 | 0.024 | 0.002 | 0.020 | 0.002 | 0.020 | |||
Reg | 0.051 | 0.533 | 0.001 | 0.052 | 0.002 | 0.042 | 0.003 | 0.041 | |||
GSM | 0.001 | 0.027 | 0.002 | 0.068 | 0.004 | 0.023 | 0.003 | 0.020 | |||
Reg | −0.004 | 0.314 | 0.002 | 0.053 | 0.002 | 0.068 | 0.004 | 0.040 | |||
GSM | −0.236 | 0.281 | −0.126 | 0.191 | −0.133 | 0.167 | −0.020 | 0.146 | |||
Reg | −0.227 | 0.273 | −0.065 | 0.135 | −0.081 | 0.114 | 0.066 | 0.092 | |||
GSM | 0.513 | 1.134 | 0.131 | 0.186 | 0.188 | 0.185 | 0.149 | 0.171 | |||
Reg | 0.733 | 1.579 | 0.222 | 0.171 | 0.280 | 0.176 | 0.258 | 0.158 | |||
GSM | −0.234 | 0.315 | −0.131 | 0.198 | −0.152 | 0.215 | −0.058 | 0.171 | |||
Reg | −0.225 | 0.312 | −0.073 | 0.139 | −0.106 | 0.167 | 0.015 | 0.110 | |||
GSM | 0.203 | 0.106 | 0.159 | 0.097 | 0.193 | 0.083 | 0.140 | 0.062 | |||
Reg | 0.203 | 0.387 | 0.126 | 0.106 | 0.169 | 0.105 | 0.106 | 0.066 | |||
GSM | −0.414 | 1.493 | −0.004 | 0.094 | 0.015 | 0.057 | 0.073 | 0.054 | |||
Reg | −0.427 | 1.966 | −0.044 | 0.119 | −0.029 | 0.079 | 0.031 | 0.068 | |||
GSM | 0.204 | 0.126 | 0.185 | 0.107 | 0.195 | 0.109 | 0.188 | 0.096 | |||
Reg | 0.206 | 0.404 | 0.153 | 0.113 | 0.175 | 0.133 | 0.156 | 0.096 | |||
IPWE | 0.209 | 0.717 | 0.544 | 0.395 | 0.249 | 0.243 | 0.313 | 0.182 | |||
1.249 | 6.745 | 0.424 | 0.296 | 0.365 | 0.254 | 0.295 | 0.173 | ||||
0.213 | 0.713 | 0.590 | 0.445 | 0.238 | 0.264 | 0.395 | 0.249 |
Summary statistics of simulation results for estimating the ACE in the second simulation setting: bias and MSE of the TMLE, AIPWE, and IPWE estimators under correct, misspecified, and omitted specifications of the outcome model (Q: GSM or Reg) and the propensity score model (GLM, Tree, GBM, SL)
Method | Q | GLM Bias | GLM MSE | Tree Bias | Tree MSE | GBM Bias | GBM MSE | SL Bias | SL MSE
---|---|---|---|---|---|---|---|---|---
TMLE | GSM | −0.007 | 0.044 | −0.004 | 0.033 | −0.004 | 0.031 | −0.003 | 0.028 | ||
Reg | −0.052 | 1.566 | −0.022 | 0.077 | −0.017 | 0.090 | −0.006 | 0.055 | |||
GSM | −0.005 | 0.042 | −0.003 | 0.034 | −0.003 | 0.027 | −0.004 | 0.026 | |||
Reg | 0.005 | 0.126 | −0.003 | 0.071 | 0.001 | 0.053 | 0.002 | 0.053 | |||
GSM | −0.007 | 0.044 | −0.005 | 0.029 | −0.004 | 0.032 | −0.005 | 0.027 | |||
Reg | −0.051 | 1.536 | −0.023 | 0.072 | −0.017 | 0.092 | −0.014 | 0.055 | |||
GSM | −0.480 | 0.625 | −0.036 | 0.139 | −0.211 | 0.146 | 0.033 | 0.069 | |||
Reg | −0.547 | 0.711 | −0.028 | 0.133 | −0.219 | 0.148 | −0.146 | 0.126 | |||
GSM | 0.505 | 0.391 | 0.360 | 0.246 | 0.362 | 0.202 | 0.317 | 0.170 | |||
Reg | 0.637 | 0.626 | 0.389 | 0.284 | 0.390 | 0.219 | 0.309 | 0.171 | |||
GSM | −0.474 | 0.640 | −0.127 | 0.118 | −0.241 | 0.190 | −0.084 | 0.096 | |||
Reg | −0.542 | 0.727 | −0.126 | 0.117 | −0.254 | 0.193 | −0.186 | 0.127 | |||
GSM | 0.495 | 0.357 | 0.342 | 0.226 | 0.439 | 0.253 | 0.312 | 0.152 | |||
Reg | 0.619 | 2.353 | 0.241 | 0.178 | 0.390 | 0.256 | 0.082 | 0.108 | |||
GSM | −0.299 | 0.320 | −0.060 | 0.157 | 0.109 | 0.077 | 0.186 | 0.098 | |||
Reg | −0.368 | 0.425 | −0.140 | 0.197 | 0.024 | 0.087 | −0.061 | 0.099 | |||
GSM | 0.497 | 0.375 | 0.436 | 0.266 | 0.446 | 0.278 | 0.443 | 0.266 | |||
Reg | 0.622 | 2.353 | 0.350 | 0.226 | 0.403 | 0.283 | 0.351 | 0.216 | |||
AIPWE | GSM | −0.008 | 0.059 | −0.004 | 0.033 | −0.005 | 0.030 | −0.004 | 0.027 | ||
Reg | 0.000 | 3.096 | −0.008 | 0.077 | −0.006 | 0.091 | −0.003 | 0.056 | |||
GSM | −0.002 | 0.083 | −0.004 | 0.034 | −0.005 | 0.026 | −0.005 | 0.026 | |||
Reg | 0.023 | 0.362 | −0.006 | 0.074 | −0.004 | 0.054 | −0.003 | 0.054 | |||
GSM | −0.007 | 0.059 | −0.006 | 0.029 | −0.005 | 0.031 | −0.005 | 0.027 | |||
Reg | 0.001 | 2.989 | −0.005 | 0.072 | −0.007 | 0.094 | −0.003 | 0.056 | |||
GSM | −0.662 | 1.395 | −0.114 | 0.177 | −0.256 | 0.199 | 0.026 | 0.088 | |||
Reg | −0.758 | 1.573 | −0.107 | 0.162 | −0.264 | 0.195 | 0.059 | 0.072 | |||
GSM | 0.494 | 0.555 | 0.310 | 0.233 | 0.312 | 0.190 | 0.301 | 0.166 | |||
Reg | 0.671 | 1.015 | 0.350 | 0.264 | 0.355 | 0.201 | 0.329 | 0.180 | |||
GSM | −0.652 | 1.379 | −0.217 | 0.174 | −0.294 | 0.253 | −0.112 | 0.123 | |||
Reg | −0.750 | 1.556 | −0.221 | 0.164 | −0.309 | 0.252 | −0.103 | 0.102 | |||
GSM | 0.503 | 0.428 | 0.344 | 0.229 | 0.442 | 0.261 | 0.336 | 0.168 | |||
Reg | 0.623 | 3.915 | 0.246 | 0.184 | 0.387 | 0.257 | 0.245 | 0.133 | |||
GSM | −0.355 | 0.785 | −0.074 | 0.169 | 0.132 | 0.087 | 0.228 | 0.115 | |||
Reg | −0.460 | 1.182 | −0.184 | 0.232 | 0.025 | 0.089 | 0.127 | 0.102 | |||
GSM | 0.505 | 0.444 | 0.436 | 0.267 | 0.448 | 0.283 | 0.444 | 0.266 | |||
Reg | 0.627 | 3.842 | 0.356 | 0.232 | 0.399 | 0.282 | 0.368 | 0.221 | |||
IPWE | 0.670 | 5.469 | 0.821 | 0.852 | 0.418 | 0.405 | 0.281 | 0.198 | |||
0.970 | 4.260 | 0.567 | 0.551 | 0.457 | 0.321 | 0.350 | 0.221 | ||||
0.672 | 5.256 | 0.938 | 1.000 | 0.417 | 0.403 | 0.576 | 0.436 |
Simulation results of the regression model with GLM (REG+), causal forest (CF), TMLE-SL, and AIPWE-SL. Bias and MSE (in parentheses) are calculated based on the sample sizes $n = 250$, $500$, and $1000$.
n | REG+ | CF | TMLE-SL | AIPWE-SL
---|---|---|---|---
250 | 0.009 (0.105) | 0.251 (0.120) | 0.013 (0.041) | ||||
−0.020 (0.161) | 0.252 (0.155) | −0.021 (2.530) | |||||
0.167 (0.148) | 0.226 (0.162) | 0.151 (0.113) | |||||
500 | 0.002 (0.053) | 0.071 (0.031) | 0.005 (0.020) | ||||
−0.034 (0.079) | 0.146 (0.069) | −0.020 (0.146) | |||||
0.154 (0.084) | 0.157 (0.083) | 0.140 (0.062) | |||||
1000 | 0.003 (0.027) | 0.042 (0.013) | |||||
0.131 (0.040) | 0.079 (0.038) | 0.038 (0.093) | |||||
0.156 (0.056) | 0.164 (0.055) | 0.139 (0.068) | |||||
250 | 0.005 (0.133) | 0.268 (0.137) | 0.009 (0.055) | ||||
−0.222 (0.210) | 0.240 (0.154) | 0.019 (0.252) | |||||
0.372 (0.306) | 0.427 (0.309) | 0.342 (0.226) | |||||
500 | −0.006 (0.073) | −0.046 (0.034) | −0.004 (0.027) | ||||
−0.247 (0.132) | 0.033 (0.069) | 0.026 (0.088) | |||||
0.362 (0.222) | 0.374 (0.206) | 0.336 (0.168) | |||||
1000 | 0.001 (0.036) | −0.067 (0.019) | 0.006 (0.014) | ||||
−0.260 (0.101) | 0.015 (0.036) | 0.032 (0.045) | |||||
0.368 (0.181) | 0.391 (0.182) | 0.344 (0.143) |
Summary results of PCI data analysis from regression (Reg), IPWE, AIPWE, TMLE and CF methods
Method | Q model | PS model | Outcome | ACE | Bootstrap SE | Wald 95% CI
---|---|---|---|---|---|---
Reg | Reg | mortality | −0.050 | 0.002 | (−0.054, −0.046) | |
cardbill | 981.579 | 96.361 | (792.712, 1170.446) | |||
GSM | mortality | −0.051 | 0.002 | (−0.055, −0.046) | ||
cardbill | 1052.146 | 199.176 | (661.761, 1442.531) | |||
IPWE | GLM | mortality | −0.067 | 0.032 | (−0.130, −0.003) | |
cardbill | −27.852 | 1980.224 | (−3909.092, 3853.388) | |||
Tree | mortality | −0.033 | 0.015 | (−0.063, −0.004) | ||
cardbill | 1537.173 | 1411.401 | (−1229.174, 4303.520) | |||
GBM | mortality | −0.049 | 0.023 | (−0.093, −0.005) | ||
cardbill | 1690.826 | 1556.244 | (−1359.413, 4741.065) | |||
SL | mortality | −0.032 | 0.014 | (−0.059, −0.005) | ||
cardbill | 1844.531 | 1359.656 | (−820.394, 4509.456) | |||
AIPWE | Reg | GLM | mortality | −0.065 | 0.027 | (−0.118, −0.012) |
cardbill | 251.458 | 1123.174 | (−1949.963, 2452.879) | |||
Tree | mortality | −0.042 | 0.014 | (−0.070, −0.013) | |
cardbill | 1262.021 | 829.256 | (−363.320, 2887.363) | |||
GBM | mortality | −0.056 | 0.020 | (−0.095, −0.018) | ||
cardbill | 689.443 | 912.893 | (−1099.827, 2478.713) | |||
SL | mortality | −0.045 | 0.013 | (−0.070, −0.019) | ||
cardbill | 983.534 | 824.881 | (−633.233, 2600.301) | |||
GSM | GLM | mortality | −0.058 | 0.023 | (−0.104, −0.012) | |
cardbill | 275.740 | 991.980 | (−1668.540, 2220.021) | |||
Tree | mortality | −0.044 | 0.013 | (−0.071, −0.018) | ||
cardbill | 844.911 | 823.143 | (−768.448, 2458.271) | |||
GBM | mortality | −0.053 | 0.017 | (−0.086, −0.020) | ||
cardbill | 481.583 | 865.130 | (−1214.071, 2177.237) | |||
SL | mortality | −0.047 | 0.012 | (−0.070, −0.023) | ||
cardbill | 640.864 | 819.759 | (−965.863, 2247.591) | |||
TMLE | Reg | GLM | mortality | −0.071 | 0.029 | (−0.128, −0.013) |
cardbill | 12.931 | 1226.643 | (−2391.289, 2417.150) | |||
Tree | mortality | −0.043 | 0.015 | (−0.072, −0.014) | ||
cardbill | 1195.431 | 848.000 | (−466.648, 2857.511) | |||
GBM | mortality | −0.061 | 0.021 | (−0.102, −0.020) | ||
cardbill | 546.337 | 994.494 | (−1402.870, 2495.545) | |||
SL | mortality | −0.045 | 0.013 | (−0.071, −0.019) | ||
cardbill | 968.201 | 853.793 | (−705.232, 2641.635) | |||
GSM | GLM | mortality | −0.064 | 0.025 | (−0.112, −0.015) | |
cardbill | 612.648 | 936.713 | (−1223.310, 2448.606) | |||
Tree | mortality | −0.043 | 0.013 | (−0.069, −0.017) | ||
cardbill | 1076.901 | 810.527 | (−511.731, 2665.534 ) | |||
GBM | mortality | −0.056 | 0.017 | (−0.090, −0.023 ) | ||
cardbill | 745.969 | 837.604 | (−895.734, 2387.673 ) | |||
SL | mortality | −0.046 | 0.012 | (−0.070, −0.022) | ||
cardbill | 891.970 | 811.203 | (−697.988, 2481.927 ) | |||
CF | mortality | −0.040 | 0.001 | (−0.042, −0.038) | ||
cardbill | 993.260 | 60.931 | (873.836, 1112.684)