-
Background: Evidence to guide type 2 diabetes treatment individualization is limited. We evaluated heterogeneous treatment effects (HTE) of intensive glycemic control on major adverse cardiovascular events (MACE) in patients with type 2 diabetes in the Action to Control Cardiovascular Risk in Diabetes Study (ACCORD) and the Veterans Affairs Diabetes Trial (VADT).
Methods: Causal forest machine learning analysis was performed using pooled individual data from the two randomized trials (n = 12,042) to identify HTE of intensive versus standard glycemic control on MACE in patients with type 2 diabetes. We used variable prioritization from the causal forests to build a summary decision tree and examined the risk difference of MACE between treatment arms in the resulting subgroups.
Results: The summary decision tree used five variables (hemoglobin glycation index, estimated glomerular filtration rate, fasting glucose, age, and body mass index) to define eight subgroups in which risk differences of MACE ranged from −5.1% (95% CI −8.7, −1.5) to 3.1% (95% CI 0.2, 6.0); negative values represent lower MACE associated with intensive glycemic control. Intensive glycemic control was associated with lower MACE in the pooled study data in subgroups with low (−4.2% [95% CI −8.1, −1.0]), intermediate (−5.1% [95% CI −8.7, −1.5]), and high (−4.3% [95% CI −7.7, −1.0]) MACE rates, with consistent directions of effect in ACCORD and VADT alone.
Conclusions: This data-driven analysis provides evidence supporting the diabetes treatment guideline recommendation of intensive glucose lowering in patients with diabetes and low cardiovascular risk, and additionally suggests potential benefits of intensive glycemic control in some individuals at higher cardiovascular risk.
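Below is a minimal sketch of the subgroup-discovery workflow described above: estimate individual treatment effects, fit a shallow summary tree on those estimates, and compute per-subgroup risk differences. For illustration, a simple T-learner (two arm-specific risk models) stands in for the causal forest, and the column names (`hgi`, `egfr`, `fpg`, `age`, `bmi`, `intensive`, `mace`) are assumptions, not the authors' code.

```python
# Hypothetical sketch: CATE estimation + summary decision tree + subgroup risk
# differences. A T-learner stands in for the causal forest used in the paper.
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeRegressor

COVARIATES = ["hgi", "egfr", "fpg", "age", "bmi"]  # assumed column names

def subgroup_risk_differences(df, arm_col="intensive", y_col="mace"):
    """df: pandas DataFrame of pooled trial data with binary arm and outcome."""
    X = df[COVARIATES].to_numpy()
    t, y = df[arm_col].to_numpy(), df[y_col].to_numpy()

    # Arm-specific MACE risk models; their difference approximates the CATE.
    m1 = RandomForestClassifier(n_estimators=500, min_samples_leaf=50).fit(X[t == 1], y[t == 1])
    m0 = RandomForestClassifier(n_estimators=500, min_samples_leaf=50).fit(X[t == 0], y[t == 0])
    cate = m1.predict_proba(X)[:, 1] - m0.predict_proba(X)[:, 1]

    # Shallow summary tree: depth 3 yields at most eight leaves (subgroups).
    tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=200).fit(X, cate)
    grouped = df.assign(leaf=tree.apply(X)).groupby("leaf")

    # Unadjusted risk difference of MACE between arms within each subgroup.
    return {leaf: g.loc[g[arm_col] == 1, y_col].mean()
                  - g.loc[g[arm_col] == 0, y_col].mean()
            for leaf, g in grouped}
```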
-
A common goal in observational research is to estimate marginal causal effects in the presence of confounding variables. One solution to this problem is to weight the outcomes by the covariate distribution so that the data appear randomized. The propensity score is a natural quantity that arises in this setting. Propensity score weights have desirable asymptotic properties, but they often fail to adequately balance covariate data in finite samples. Empirical covariate balancing methods are an appealing alternative because they exactly balance the sample moments of the covariate distribution. With this objective in mind, we propose a framework for estimating balancing weights by solving a constrained convex program in which the criterion function to be optimized is a Bregman distance. We then show that the different distances in this class yield weights identical to those of other covariate balancing methods. A series of numerical studies is presented to demonstrate these similarities.
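As a concrete illustration, the sketch below solves the constrained convex program for the Kullback-Leibler (negative entropy) member of the Bregman family, whose solution coincides with entropy balancing weights. The Lagrangian dual form and function names are our own simplification, not the authors' implementation.

```python
# Minimal sketch: balancing weights as a constrained convex program, using the
# KL/entropy Bregman distance. Weights on the control sample are chosen so the
# weighted covariate means exactly match the treated-group means.
import numpy as np
from scipy.optimize import minimize

def entropy_balancing_weights(X_control, target_means):
    """Solve min_w sum_i w_i log w_i  s.t.  w >= 0, sum w = 1, X'w = target_means,
    via the unconstrained log-sum-exp dual in the multipliers lam."""
    Xc = X_control - target_means            # center: dual becomes log-sum-exp
    def dual(lam):
        return np.log(np.exp(Xc @ lam).sum())
    def grad(lam):                           # softmax-weighted moment imbalance
        w = np.exp(Xc @ lam)
        w /= w.sum()
        return Xc.T @ w
    res = minimize(dual, np.zeros(Xc.shape[1]), jac=grad, method="BFGS")
    w = np.exp(Xc @ res.x)
    return w / w.sum()                       # normalized to sum to one
```

Swapping the entropy term for another Bregman distance changes only the dual objective; the point of the framework is that many such choices recover the weights of existing balancing methods.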
-
Two important considerations in clinical research studies are proper evaluation of internal and external validity. While randomized clinical trials can overcome several threats to internal validity, they may be prone to poor external validity. Conversely, large prospective observational studies sampled from a broadly generalizable population may be externally valid yet susceptible to threats to internal validity, particularly confounding. Thus, methods that address confounding and enhance transportability of study results across populations are essential for internally and externally valid causal inference, respectively. These issues also arise in a closely related problem known as data fusion. We develop a calibration method to generate balancing weights that address both confounding and sampling bias, thereby enabling valid estimation of the target population average treatment effect. We compare the calibration approach to two additional doubly robust methods that estimate the effect of an intervention on an outcome within a second, possibly unrelated, target population. The proposed methodologies can be extended to data-fusion problems that evaluate the effects of an intervention using data from two related studies sampled from different populations. A simulation study demonstrates the advantages and similarities of the different techniques. We also test the performance of the calibration approach in a motivating real-data example, asking whether the effect of biguanides versus sulfonylureas (the two most common oral diabetes medication classes for initial treatment) on all-cause mortality described in a historical cohort applies to a contemporary cohort of US Veterans with diabetes.
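A minimal sketch of the calibration idea follows, assuming the simplest version in which each treatment arm of the observational cohort is separately reweighted to the target population's covariate moments; the weighted outcome contrast then estimates the target population average treatment effect. Variable names are placeholders, and the doubly robust augmentations discussed above are omitted.

```python
# Hypothetical sketch: calibration weights for a target-population ATE.
import numpy as np
from scipy.optimize import minimize

def calibrate(X, target_means):
    # Entropy-type calibration: weights proportional to exp(X @ lam), with lam
    # chosen (via the dual) so weighted covariate means hit target_means.
    Xc = X - target_means
    res = minimize(lambda lam: np.log(np.exp(Xc @ lam).sum()),
                   np.zeros(Xc.shape[1]), method="BFGS")
    w = np.exp(Xc @ res.x)
    return w / w.sum()

def target_population_ate(X, t, y, X_target):
    m = X_target.mean(axis=0)                # target-population moments
    w1 = calibrate(X[t == 1], m)             # treated arm calibrated to target
    w0 = calibrate(X[t == 0], m)             # control arm calibrated to target
    return w1 @ y[t == 1] - w0 @ y[t == 0]   # weighted mean outcome contrast
```

Because both arms are balanced to the same target moments, the contrast addresses confounding and sampling bias at once, which is the dual role the calibration weights play above.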
-
We show how entropy balancing can be used to transport experimental treatment effects from a trial population to a target population. The method is doubly robust in the sense that if either the outcome model or the probability of trial participation is correctly specified, the estimate of the target population average treatment effect is consistent. Furthermore, only the sample moments of the effect modifiers drawn from the target population are required to consistently estimate the target population average treatment effect. We compare the finite-sample performance of entropy balancing with several alternative methods for transporting treatment effects between populations; entropy balancing is efficient and robust to model misspecification. We also examine the results of our proposed method in an applied analysis transporting the Action to Control Cardiovascular Risk in Diabetes Blood Pressure trial to a sample of US adults with diabetes taken from the National Health and Nutrition Examination Survey cohort.
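A sketch of the transport step under our reading of the abstract: entropy balancing reweights trial participants so their effect-modifier means match the target sample (only those moments are needed), and the transported effect is the weighted contrast between randomized arms. Names and the within-arm normalization are illustrative choices, not the authors' code.

```python
# Hypothetical sketch: transporting a trial ATE with entropy balancing.
import numpy as np
from scipy.optimize import minimize

def transported_ate(X_trial, t, y, target_modifier_means):
    # Weight the trial sample so effect-modifier means match the target.
    Xc = X_trial - target_modifier_means
    res = minimize(lambda lam: np.log(np.exp(Xc @ lam).sum()),
                   np.zeros(Xc.shape[1]), method="BFGS")
    w = np.exp(Xc @ res.x)
    # Hajek-style contrast: normalize weights within each randomized arm.
    w1 = w[t == 1] / w[t == 1].sum()
    w0 = w[t == 0] / w[t == 0].sum()
    return w1 @ y[t == 1] - w0 @ y[t == 0]
```

Randomization within the trial means the same weights can be applied to both arms; only the reweighting toward the target population depends on the effect modifiers.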