Background: When unaddressed, contamination in child maltreatment research, in which some proportion of children recruited for a nonmaltreated comparison group have in fact been exposed to maltreatment, downwardly biases the significance and magnitude of effect size estimates. This study extends previous contamination research by investigating how a dual-measurement strategy for detecting and controlling contamination affects causal effect size estimates of child behavior problems.
Methods: This study included 634 children from the LONGSCAN study: 63 cases of confirmed child maltreatment after age 8 and 571 cases without confirmed child maltreatment. Confirmed child maltreatment and internalizing and externalizing behaviors were recorded every 2 years between ages 4 and 16. Contamination in the nonmaltreated comparison group was identified and controlled using either prospective self-report assessments at ages 12, 14, and 16 or a one-time retrospective self-report assessment at age 18. Synthetic control methods were used to establish causal effects and to quantify the impact of contamination when it was left uncontrolled, when it was controlled using prospective self-reports, and when it was controlled using retrospective self-reports.
Results: Rates of contamination ranged from 62% to 67%. Without controlling for contamination, causal effect size estimates for internalizing behaviors were not statistically significant. Causal effects became statistically significant only after controlling contamination identified from either prospective or retrospective reports, and effect sizes increased by 17% to 54%. Controlling contamination had a smaller impact on effect size increases for externalizing behaviors, but it did produce a statistically significant overall effect, relative to the model ignoring contamination, when prospective methods were used.
Conclusions: Contamination in a nonmaltreated comparison group can lead to underestimation of the magnitude and statistical significance of causal effect size estimates, especially when investigating internalizing behavior problems. Addressing contamination can facilitate the replication of results across studies.
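As a rough illustration of the detection step described in the Methods, the sketch below flags contamination in a comparison group using prospective and retrospective self-report indicators and computes the contamination rate before excluding contaminated cases. All column names and values are invented for illustration; this is not the LONGSCAN data or the study's actual code.

```python
import pandas as pd

# Hypothetical indicators: 'cps_confirmed' (official-record maltreatment),
# 'prospective_sr' (any self-report at ages 12/14/16),
# 'retrospective_sr' (one-time self-report at age 18).
df = pd.DataFrame({
    "child_id":         [1, 2, 3, 4, 5, 6],
    "cps_confirmed":    [1, 0, 0, 0, 0, 0],
    "prospective_sr":   [1, 1, 0, 1, 0, 0],
    "retrospective_sr": [1, 0, 1, 1, 0, 0],
})

# The comparison group is defined by the absence of confirmed maltreatment.
comparison = df[df["cps_confirmed"] == 0]

# Contamination rate: comparison-group children who nonetheless self-report
# maltreatment exposure under each measurement strategy.
for col in ["prospective_sr", "retrospective_sr"]:
    print(f"{col}: {comparison[col].mean():.0%} of the comparison group is contaminated")

# One way to control contamination: drop contaminated comparison cases before
# estimating effects (reclassifying them as maltreated is another option).
clean_comparison = comparison[
    (comparison["prospective_sr"] == 0) & (comparison["retrospective_sr"] == 0)
]
```

In the study above, the analogous rates were 62% to 67%, which is why the corrected comparison group yields noticeably larger effect size estimates.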
Contamination in Observational Research on Child Maltreatment: A Conceptual and Empirical Review With Implications for Future Research
Contamination is a methodological phenomenon occurring in child maltreatment research when individuals in an established comparison condition have, in reality, been exposed to maltreatment during childhood. The current paper: (1) provides a conceptual and methodological introduction to contamination in child maltreatment research, (2) reviews the empirical literature demonstrating that the presence of contamination biases causal estimates in both prospective and retrospective cohort studies of child maltreatment effects, (3) outlines a dual measurement strategy for how child maltreatment researchers can address contamination, and (4) describes modern statistical methods for generating causal estimates in child maltreatment research after contamination is controlled. Our goal is to introduce the issue of contamination to researchers examining the effects of child maltreatment in an effort to improve the precision and replication of causal estimates that ultimately inform scientific and clinical decision-making as well as public policy.
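To make the review's last point concrete, here is a minimal synthetic control sketch in Python, one of the modern causal estimation methods referred to above. The function name and data layout are assumptions for illustration, not the review's actual implementation: donor-pool control units receive nonnegative weights summing to one, chosen to match a treated unit's pre-treatment outcome trajectory, and the post-treatment gap between the treated unit and its synthetic counterpart is read as the causal effect. In a contamination-aware analysis, contaminated cases would be removed from the donor pool first.

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_control_effect(y_treated, Y_controls, n_pre):
    """Weight donor-pool units to match the treated unit's pre-treatment
    trajectory, then contrast post-treatment outcomes.

    y_treated  : (T,) outcome series for the treated unit
    Y_controls : (T, J) outcome series for J donor-pool units
    n_pre      : number of pre-treatment periods
    """
    J = Y_controls.shape[1]

    def pre_period_loss(w):
        return np.sum((y_treated[:n_pre] - Y_controls[:n_pre] @ w) ** 2)

    # Weights are constrained to be nonnegative and to sum to one.
    res = minimize(
        pre_period_loss,
        x0=np.full(J, 1.0 / J),
        method="SLSQP",
        bounds=[(0.0, 1.0)] * J,
        constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
    )
    synthetic = Y_controls @ res.x
    # Effect estimate: average post-treatment gap, treated minus synthetic.
    return (y_treated[n_pre:] - synthetic[n_pre:]).mean()

# Toy usage: 6 measurement waves, 20 donor children, effect of 0.5 after wave 3.
rng = np.random.default_rng(0)
Y_controls = rng.normal(size=(6, 20))
y_treated = Y_controls[:, :3].mean(axis=1)
y_treated[3:] += 0.5
print(synthetic_control_effect(y_treated, Y_controls, n_pre=3))
```

Averaging such gaps across all treated units would give an overall effect estimate of the kind reported in the abstract above.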
- Award ID(s): 2041333
- PAR ID: 10523450
- Publisher / Repository: Sage
- Date Published:
- Journal Name: Child Maltreatment
- ISSN: 1077-5595
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- This article conceptualizes the collective method to describe how 12 scholars worked collaboratively to study the effects of displacement following Hurricane Katrina. The collective method is defined as an integrated, reflexive process of research design and implementation in which a diverse group of scholars studying a common phenomenon, yet working on independent projects, engage in repeated theoretical and methodological discussions to improve (1) research transparency and accountability and (2) the rigor and efficacy of each member's unique project. This process generates critical discussions over researchers' and respondents' positionality, the framework of intersectionality, and applied ethics. Informed by feminist theoretical and methodological considerations of reflexivity, insider-outsider positionality, power relations, and social justice, the collective method can enhance scholars' standpoints regarding philosophical, ethical, and strategic issues that emerge in the research process.
- Scientists seek to understand the causal processes that generate sustainability problems and determine effective solutions. Yet, causal inquiry in nature–society systems is hampered by conceptual and methodological challenges that arise from nature–society interdependencies and the complex dynamics they create. Here, we demonstrate how sustainability scientists can address these challenges and make more robust causal claims through better integration between empirical analyses and process- or agent-based modeling. To illustrate how these different epistemological traditions can be integrated, we present four studies of air pollution regulation, natural resource management, and the spread of COVID-19. The studies show how integration can improve empirical estimates of causal effects, inform future research designs and data collection, enhance understanding of the complex dynamics that underlie observed temporal patterns, and elucidate causal mechanisms and the contexts in which they operate. These advances in causal understanding can help sustainability scientists develop better theories of phenomena where social and ecological processes are dynamically intertwined and prior causal knowledge and data are limited. The improved causal understanding also enhances governance by helping scientists and practitioners choose among potential interventions, decide when and how the timing of an intervention matters, and anticipate unexpected outcomes. Methodological integration, however, requires skills and efforts of all involved to learn how members of the respective other tradition think and analyze nature–society systems.
- Randomized A/B tests within online learning platforms represent an exciting direction in learning sciences. With minimal assumptions, they allow causal effect estimation without confounding bias and exact statistical inference even in small samples. However, often experimental samples and/or treatment effects are small, A/B tests are underpowered, and effect estimates are overly imprecise. Recent methodological advances have shown that power and statistical precision can be substantially boosted by coupling design-based causal estimation to machine-learning models of rich log data from historical users who were not in the experiment. Estimates using these techniques remain unbiased and inference remains exact without any additional assumptions. This paper reviews those methods and applies them to a new dataset including over 250 randomized A/B comparisons conducted within ASSISTments, an online learning platform. We compare results across experiments using four novel deep-learning models of auxiliary data and show that incorporating auxiliary data into causal estimates is roughly equivalent to increasing the sample size by 20% on average, or as much as 50-80% in some cases, relative to t-tests, and by about 10% on average, or as much as 30-50%, compared to cutting-edge machine learning unbiased estimates that use only data from the experiments. We show that the gains can be even larger for estimating subgroup effects, hold even when the remnant is unrepresentative of the A/B test sample, and extend to post-stratification population effects estimators.
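The auxiliary-data idea in the preceding abstract can be sketched in a few lines. This is a simplified, hypothetical rendering, not that paper's actual estimator, models, or data: a model trained only on historical "remnant" users predicts outcomes for the experimental sample, and the treatment effect is estimated from residuals. Because assignment is randomized, the residual contrast stays unbiased while a good auxiliary model shrinks its variance.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
beta = rng.normal(size=8)

# "Remnant": historical platform users who were never in the experiment.
X_rem = rng.normal(size=(5000, 8))            # rich log-data features
y_rem = X_rem @ beta + rng.normal(size=5000)

# Small randomized A/B experiment drawn from the same population.
X_exp = rng.normal(size=(200, 8))
z = rng.integers(0, 2, size=200)              # random assignment
y_exp = X_exp @ beta + 0.3 * z + rng.normal(size=200)  # true effect = 0.3

# Fit the auxiliary model on the remnant only, then residualize outcomes.
aux = GradientBoostingRegressor().fit(X_rem, y_rem)
resid = y_exp - aux.predict(X_exp)

# Randomization keeps the residual difference-in-means unbiased; the
# auxiliary model reduces its variance relative to the naive contrast.
adjusted = resid[z == 1].mean() - resid[z == 0].mean()
naive = y_exp[z == 1].mean() - y_exp[z == 0].mean()
print(f"adjusted: {adjusted:.3f}  naive: {naive:.3f}")
```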