Summary: We develop a stochastic epidemic model progressing over dynamic networks, where infection rates are heterogeneous and may vary with individual-level covariates. The joint dynamics are modeled as a continuous-time Markov chain, such that disease transmission is constrained by the contact network structure and network evolution is in turn influenced by individual disease statuses. To accommodate the partial epidemic observations common in real-world data, we propose a stochastic EM algorithm for inference, whose key innovations include efficient conditional samplers that impute missing infection and recovery times while respecting the dynamic contact network. Experiments on both synthetic and real datasets demonstrate that our inference method can accurately and efficiently recover model parameters and provide valuable insight in the presence of unobserved disease episodes in epidemic data.
Free, publicly accessible full text available December 31, 2025.
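To make the "continuous-time Markov chain constrained by the contact network" concrete, here is a minimal Gillespie-style simulation of an SIR epidemic on a fixed contact network. This is an illustrative sketch only: the paper's model additionally lets the network itself evolve and lets rates depend on covariates, which this toy version omits. All function and parameter names (`gillespie_sir`, `beta`, `gamma`) are my own, not the authors'.

```python
import random

def gillespie_sir(adj, beta, gamma, seed_node, rng=None):
    """Simulate a continuous-time Markov chain SIR epidemic on a contact
    network using the Gillespie (stochastic simulation) algorithm.

    adj: dict mapping node -> set of neighbours (the contact network)
    beta: per-contact infection rate; gamma: per-node recovery rate
    Returns a list of (time, event, node) tuples.
    """
    rng = rng or random.Random(0)
    status = {v: "S" for v in adj}
    status[seed_node] = "I"
    t, events = 0.0, [(0.0, "infect", seed_node)]
    while True:
        # Each susceptible-infectious edge fires infection at rate beta;
        # each infectious node recovers at rate gamma.
        si_edges = [(i, s) for i in adj if status[i] == "I"
                    for s in adj[i] if status[s] == "S"]
        infected = [v for v in adj if status[v] == "I"]
        total = beta * len(si_edges) + gamma * len(infected)
        if total == 0:           # no infectious nodes left: epidemic over
            return events
        t += rng.expovariate(total)  # exponential waiting time to next event
        if rng.random() < beta * len(si_edges) / total:
            _, v = rng.choice(si_edges)   # infection along a random S-I edge
            status[v] = "I"
            events.append((t, "infect", v))
        else:
            v = rng.choice(infected)      # recovery of a random infectious node
            status[v] = "R"
            events.append((t, "recover", v))
```

The inference problem the abstract addresses arises when some of these event times are unobserved and must be imputed conditionally on the network.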
-
Summary: In many observational studies, the treatment assignment mechanism is not individualistic: it allows the probability of treatment of a unit to depend on quantities beyond the unit's covariates. In such settings, unit treatments may be entangled in complex ways. In this article, we consider a particular instance of this problem in which the treatments are entangled by a social network among units. For instance, when studying the effects of peer interaction on a social media platform, the treatment of a unit depends on how the interaction network changes over time. A similar situation is encountered in many economic studies, such as those examining the effects of bilateral trade partnerships on countries' economic growth. The challenge in these settings is that individual treatments depend on a global network that may change endogenously and cannot be manipulated experimentally. We show that classical propensity score methods that ignore entanglement may lead to large bias and incorrect inference of causal effects. We then propose a solution that calculates propensity scores by marginalizing over the network change. Under an appropriate ignorability assumption, this leads to unbiased estimates of the treatment effect of interest. We also develop a randomization-based inference procedure that takes entanglement into account. Under general conditions on the network change, this procedure can deliver valid inference without explicitly modelling the network. We establish theoretical results for the proposed methods and illustrate their behaviour via simulation studies based on real-world network data. We also revisit a large-scale observational dataset on contagion of online user behaviour, showing that ignoring entanglement may inflate estimates of peer influence.
Free, publicly accessible full text available January 1, 2026.
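The idea of "marginalizing over the network change" can be sketched with a toy Monte Carlo computation. Here treatment probability is an assumed logistic function of a unit's change in network degree (a stand-in mechanism, not the paper's model), and the marginal propensity score averages that probability over random draws of the network change. All names (`treat_prob`, `marginal_propensity`, `alpha`, `beta`) are hypothetical.

```python
import math
import random

def treat_prob(degree_change, alpha=-1.0, beta=0.8):
    """P(treated | network change): a toy logistic model in the unit's
    change in degree. Assumed for illustration only."""
    return 1.0 / (1.0 + math.exp(-(alpha + beta * degree_change)))

def marginal_propensity(sample_degree_change, n_draws=5000, rng=None):
    """Approximate e_i = E_G[ P(T_i = 1 | G) ] by Monte Carlo:
    average the conditional treatment probability over draws of the
    network change. sample_degree_change(rng) draws one realization."""
    rng = rng or random.Random(0)
    return sum(treat_prob(sample_degree_change(rng))
               for _ in range(n_draws)) / n_draws
```

The point of the abstract is that conditioning on a single realized network (using `treat_prob` directly) can badly misstate the propensity when the network change is itself random; averaging over its distribution restores a valid score under the stated ignorability assumption.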
-
Abstract
Objectives: Epileptiform activity (EA) worsens outcomes in patients with acute brain injuries (e.g., aneurysmal subarachnoid hemorrhage [aSAH]). Randomized controlled trials (RCTs) assessing anti-seizure interventions are needed, but owing to scant drug efficacy data and ethical reservations about placebo use, such RCTs are lacking or hindered by design constraints. We used a pharmacological model-guided simulator to design RCTs evaluating EA treatment and to determine their feasibility.
Methods: In a single-center cohort of adults (age >18) with aSAH and EA, we employed a mechanistic pharmacokinetic-pharmacodynamic framework to model treatment response using observational data. We then simulated RCTs for levetiracetam and propofol, each with three treatment arms mirroring clinical practice and an additional placebo arm. Using our framework, we simulated EA trajectories across treatment arms. We predicted the discharge modified Rankin Scale as a function of baseline covariates, EA burden, and drug doses using a double machine learning model fitted to observational data. Differences in outcomes across arms were used to estimate the required sample size.
Results: Sample sizes required to achieve 80% power at a 5% type I error rate ranged from 500 for levetiracetam 7 mg/kg vs. placebo to >4000 for levetiracetam 15 vs. 7 mg/kg. For propofol 1 mg/kg/hr vs. placebo, 1200 participants were needed. Simulations comparing propofol at varying doses did not reach 80% power even at sample sizes >1200.
Interpretation: Our simulations using observed drug efficacy show that the required sample sizes are infeasible, even for potentially unethical placebo-controlled trials. We highlight the value of simulations with observational data for informing null hypotheses and assessing the feasibility of future trials of EA treatment.
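The power and sample-size logic behind these results follows the standard two-arm comparison of means. A minimal sketch under the usual normal approximation (not the paper's simulation-based procedure, which derives effect sizes from the pharmacological model) is:

```python
import math
from statistics import NormalDist

def n_per_arm(effect, sd, power=0.80, alpha=0.05):
    """Per-arm sample size for a two-arm, two-sided comparison of means
    with the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / effect)^2
    Smaller effects or higher power inflate n quadratically."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    return math.ceil(2 * ((z_a + z_b) * sd / effect) ** 2)
```

This quadratic blow-up in `n` as the between-arm effect shrinks is exactly the pattern in the abstract: a drug-vs-placebo contrast (large effect) needs ~500 participants, while a dose-vs-dose contrast (small effect) exceeds 4000.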
-
Abstract: With historic misses in the 2016 and 2020 US Presidential elections, interest in measuring polling errors has increased. The most common method for measuring directional errors and non-sampling excess variability during a postmortem for an election is by assessing the difference between the poll result and election result for polls conducted within a few days of the day of the election. Analysing such polling error data is notoriously difficult, with typical models being extremely sensitive to the time between the poll and the election. We leverage hidden Markov models traditionally used for election forecasting to flexibly capture time-varying preferences and treat the election result as a peek at the typically hidden Markovian process. Our results are much less sensitive to the choice of time window, avoid conflating shifting preferences with polling error, and are more interpretable despite a highly flexible model. We demonstrate these results with data on polls from the 2004 through 2020 US Presidential elections and 1992 through 2020 US Senate elections, concluding that previously reported estimates of bias in Presidential elections were too extreme by 10%, estimated bias in Senatorial elections was too extreme by 25%, and excess variability estimates were also too large.
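The "peek at the typically hidden Markovian process" idea can be illustrated with the simplest such model: a Gaussian random walk for latent voter preference, filtered against noisy polls. This is a stylized sketch of the filtering step only, with assumed names and variances; the paper's hidden Markov models are richer and also handle polling error estimation.

```python
def kalman_polls(polls, obs_var, state_var, mu0=50.0, var0=25.0):
    """Forward (Kalman) filter for a toy state-space model of preference:
        mu_t   = mu_{t-1} + N(0, state_var)   # hidden preference drifts
        poll_t = mu_t     + N(0, obs_var)     # polls are noisy peeks
    Returns the filtered means and variances after each poll."""
    mu, var = mu0, var0
    means, variances = [], []
    for y in polls:
        var += state_var              # predict: preference may have moved
        k = var / (var + obs_var)     # gain: how much to trust this poll
        mu += k * (y - mu)            # update toward the observed poll
        var *= (1 - k)
        means.append(mu)
        variances.append(var)
    return means, variances
```

In this framing, an election result is just one more observation with very small `obs_var`, so it anchors the latent state without forcing nearby polls to be interpreted as pure error; that is what makes the approach robust to the choice of pre-election time window.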