Title: A study of disproportionately affected populations by race/ethnicity during the SARS-CoV-2 pandemic using multi-population SEIR modeling and ensemble data assimilation

The disparity in the impact of COVID-19 on minority populations in the United States has been well established in the available data on deaths, case counts, and adverse outcomes. However, critical metrics used by public health officials and epidemiologists, such as a time-dependent viral reproductive number ($R_t$), can be hard to calculate from these data, especially for individual populations. Furthermore, disparities in the availability of testing, record-keeping infrastructure, or government funding in disadvantaged populations can produce incomplete data sets. In this work, we apply ensemble data assimilation techniques, which optimally combine model and data, to produce a more complete data set and better estimates of the critical metrics used by public health officials and epidemiologists. We employ a multi-population SEIR (Susceptible, Exposed, Infected, and Recovered) model with a time-dependent reproductive number and an age-stratified contact rate matrix for each population. We assimilate the daily death data for populations separated by ethnic/racial groupings using a technique called Ensemble Smoothing with Multiple Data Assimilation (ESMDA) to estimate model parameters and produce an $R_t(n)$ for the $n^{\text{th}}$ population. We do this with three distinct approaches: (1) using the same contact matrices and prior $R_t(n)$ for each population; (2) assigning contact matrices with increased contact rates for working-age and older adults to populations experiencing disparity; and (3) as in (2), but with a time-continuous update to $R_t(n)$. We study nine U.S. states and the District of Columbia, providing a complete time series of the pandemic in each and, in some cases, identifying disparities not otherwise evident in the aggregate statistics.
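To make the modeling ingredients concrete, the following minimal Python sketch (not the authors' code) advances an age-stratified SEIR model separately for two populations, each with its own contact matrix and reproductive number (held constant here for brevity; in the paper $R_t(n)$ varies in time). All population sizes, contact matrices, rates, and the normalization tying $R_t$ to the transmission rate are illustrative assumptions, and the ESMDA parameter-estimation machinery is not shown.

import numpy as np

def seir_step(state, C, R_t, N, sigma=1/3.0, gamma=1/7.0, dt=1.0):
    # One forward-Euler day for an age-stratified SEIR model of a single
    # population. state = (S, E, I, R) arrays over age groups, C = contact
    # matrix, R_t = current reproductive number, N = age-group sizes.
    # Scaling beta by the spectral radius of C is one common convention for
    # tying the transmission rate to R_t; the paper's exact choice may differ.
    S, E, I, R = state
    beta = R_t * gamma / np.max(np.abs(np.linalg.eigvals(C)))
    lam = beta * C.dot(I / N)              # force of infection per age group
    new_inf = lam * S * dt                 # S -> E
    new_sym = sigma * E * dt               # E -> I at rate sigma
    new_rec = gamma * I * dt               # I -> R at rate gamma
    return (S - new_inf, E + new_inf - new_sym, I + new_sym - new_rec, R + new_rec)

# Two populations with their own contact matrices and reproductive numbers
# (all numbers are placeholders).
C = [np.array([[8.0, 3.0], [3.0, 5.0]]),       # population 0
     np.array([[10.0, 5.0], [5.0, 7.0]])]      # population 1: higher contact rates
N = [np.array([6.0e5, 4.0e5]), np.array([3.0e5, 2.0e5])]
R_t = [1.4, 1.8]
states = [(N[n] - 10.0, np.zeros(2), np.full(2, 10.0), np.zeros(2)) for n in range(2)]

for day in range(120):
    states = [seir_step(states[n], C[n], R_t[n], N[n]) for n in range(2)]

print([s[2].round(1) for s in states])         # infected per age group, per population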

 
Award ID(s): 1722578
NSF-PAR ID: 10297733
Journal Name: Foundations of Data Science
Volume: 3
Issue: 3
ISSN: 2639-8001
Page Range / eLocation ID: 479
Sponsoring Org: National Science Foundation
More Like this
  1. We consider the well-known Lieb-Liniger (LL) model for $N$ bosons interacting pairwise on the line via the $\delta$ potential in the mean-field scaling regime. Assuming suitable asymptotic factorization of the initial wave functions and convergence of the microscopic energy per particle, we show that the time-dependent reduced density matrices of the system converge in trace norm to the pure states given by the solution to the one-dimensional cubic nonlinear Schrödinger equation (NLS), with an explicit rate of convergence. In contrast to previous work [3], which relies on the formalism of second quantization and coherent states and does not give an explicit rate, our proof is based on the counting method of Pickl [65,66,67] and Knowles and Pickl [44]. To overcome difficulties stemming from the singularity of the $\delta$ potential, we introduce a new short-range approximation argument that exploits the Hölder continuity of the $N$-body wave function in a single particle variable. By further exploiting the $L^2$-subcritical well-posedness theory for the 1D cubic NLS, we can prove mean-field convergence when the limiting solution to the NLS has finite mass, but only for a very special class of $N$-body initial states.
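For reference, and up to the scaling conventions used in the paper, the limiting 1D cubic NLS referred to above takes the standard form
\[
i\,\partial_t \phi(t,x) = -\tfrac{1}{2}\,\partial_x^2 \phi(t,x) + \kappa\,|\phi(t,x)|^2\,\phi(t,x), \qquad (t,x) \in \mathbb{R} \times \mathbb{R},
\]
with a coupling constant $\kappa$ determined by the strength of the $\delta$ interaction. Writing $\gamma_N^{(1)}(t)$ for the one-particle reduced density matrix, the convergence statement is that $\gamma_N^{(1)}(t)$ approaches the rank-one projector $|\phi(t)\rangle\langle\phi(t)|$ in trace norm as $N \to \infty$.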

  2. It is shown that for any positive integer $n \ge 3$, there is a stable irreducible $n \times n$ matrix $A$ with $2n + 1 - \lfloor \frac{n}{3} \rfloor$ nonzero entries exhibiting Turing instability. Moreover, when $n = 3$, the result is best possible, i.e., every $3 \times 3$ stable matrix with five or fewer nonzero entries will not exhibit Turing instability. Furthermore, we determine all possible $3 \times 3$ irreducible sign pattern matrices with 6 nonzero entries which can be realized by a matrix $A$ that exhibits Turing instability.
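The notion of Turing (diffusion-driven) instability above, a stable reaction matrix $A$ for which $A - k^2 D$ is unstable for some wavenumber $k$ and nonnegative diagonal diffusion matrix $D$, is easy to check numerically. The Python sketch below uses the classic two-species activator-inhibitor example purely to illustrate the concept; it is not the paper's sparse $3 \times 3$ construction, and all numbers are placeholders.

import numpy as np

def spectral_abscissa(M):
    # Largest real part among the eigenvalues of M.
    return np.max(np.linalg.eigvals(M).real)

# Classic 2-species activator-inhibitor illustration (placeholder numbers).
A = np.array([[1.0, -1.0],
              [2.0, -1.5]])
D = np.diag([1.0, 40.0])            # fast-diffusing inhibitor

print("A stable:", spectral_abscissa(A) < 0)     # True: reaction system is stable

# With diffusion, stability of A - k^2 D is required for every wavenumber k.
ks = np.linspace(0.1, 2.0, 40)
growth = [spectral_abscissa(A - (k**2) * D) for k in ks]
print("Turing unstable:", max(growth) > 0)       # True: some spatial modes grow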

  3. Genetic variations in the COVID-19 virus were among the main causes of the COVID-19 pandemic outbreak in 2020 and 2021. In this article, we introduce a new type of model for the distribution of variants: a system coupling ordinary differential equations (ODEs) with a measure differential equation (MDE), stemming from the classical SIR model. Specifically, we model the evolution of the susceptible $S$ and removed $R$ populations by ODEs and the infected $I$ population by an MDE composed of a probability vector field (PVF) and a source term. In addition, the ODEs for $S$ and $R$ contain terms that are related to the measure $I$. We establish analytically the well-posedness of the coupled ODE-MDE system using a generalized Wasserstein distance. We give two examples showing that the proposed ODE-MDE model reduces to the classical SIR model in the special cases of constant or time-dependent parameters.
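For reference, the classical SIR model that the coupled ODE-MDE system is shown to recover in those special cases is the standard system (in common notation; the paper's normalization may differ):
\[
\frac{dS}{dt} = -\beta\,\frac{S I}{N}, \qquad
\frac{dI}{dt} = \beta\,\frac{S I}{N} - \gamma I, \qquad
\frac{dR}{dt} = \gamma I,
\]
where $\beta$ is the transmission rate, $\gamma$ the removal rate, and $N = S + I + R$ the (constant) total population.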

  4. Stochastic differential games have been used extensively to model agents' competition in finance, for instance in P2P lending platforms in the Fintech industry, in the banking system for systemic risk, and in insurance markets. The recently proposed machine learning algorithm deep fictitious play provides a novel and efficient tool for finding the Markovian Nash equilibrium of large $N$-player asymmetric stochastic differential games [J. Han and R. Hu, Mathematical and Scientific Machine Learning Conference, pages 221-245, PMLR, 2020]. By incorporating the idea of fictitious play, the algorithm decouples the game into $N$ sub-optimization problems and identifies each player's optimal strategy with the deep backward stochastic differential equation (BSDE) method, in parallel and repeatedly. In this paper, we prove the convergence of deep fictitious play (DFP) to the true Nash equilibrium. We also show that the strategy based on DFP forms an $\epsilon$-Nash equilibrium. We generalize the algorithm by proposing a new approach to decoupling the games, and we present numerical results for large population games showing empirical convergence of the algorithm beyond the technical assumptions of the theorems.
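The decoupling idea behind fictitious play can be illustrated in a much simpler static setting. The Python sketch below is not the deep BSDE algorithm of the paper; it is a toy best-response iteration for a hypothetical $N$-player quadratic game in which each player repeatedly responds to the other players' strategies from the previous round, converging to the Nash equilibrium.

import numpy as np

# Toy N-player quadratic game: player i picks x_i to minimize
# (x_i - c * mean(x_{-i}) - b)^2, so the best response is
# x_i = c * mean(x_{-i}) + b.  For |c| < 1 the iteration converges to the
# symmetric Nash equilibrium x* = b / (1 - c).
N, b, c = 50, 2.0, 0.5
x = np.random.default_rng(0).normal(size=N)   # initial strategies

for stage in range(30):
    mean_others = (x.sum() - x) / (N - 1)     # opponents' average, per player
    x = c * mean_others + b                   # everyone best-responds to the
                                              # frozen strategies of the others

print("strategies ~", x.mean().round(4), "Nash equilibrium =", b / (1 - c))

In deep fictitious play the same freeze-and-respond structure is used, except that each decoupled sub-problem is a stochastic control problem solved with a deep BSDE solver rather than a closed-form best response.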

  5. This paper introduces a novel generative encoder (GE) framework for generative imaging and image processing tasks such as image reconstruction, compression, denoising, inpainting, deblurring, and super-resolution. GE unifies the generative capacity of GANs and the stability of AEs in an optimization framework, instead of stacking GANs and AEs into a single network or combining their loss functions as in the existing literature. GE also provides a novel approach to visualizing relationships between latent spaces and the data space. The GE framework consists of a pre-training phase and a solving phase. In the pre-training phase, a GAN whose generator $G$ captures the data distribution of a given image set and an AE network whose encoder $E$ compresses images following the distribution estimated by $G$ are trained separately, resulting in two latent representations of the data, referred to as the generative and encoding latent spaces, respectively. In the solving phase, given a noisy image $x = \mathcal{P}(x^*)$, where $x^*$ is the target unknown image and $\mathcal{P}$ is an operator adding additive, multiplicative, or convolutional noise, or equivalently given such an image $x$ in the compressed domain, i.e., given $m = E(x)$, the two latent spaces are unified via solving the optimization problem

    and the image $x^*$ is recovered in a generative way via $\hat{x} := G(z^*) \approx x^*$, where $\lambda > 0$ is a hyperparameter. The unification of the two spaces yields improved performance over the corresponding GAN and AE networks while visualizing interesting properties of each latent space.
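As a purely hypothetical illustration of the kind of latent-space recovery described above (the paper's actual objective is not reproduced here, and the linear "generator" and "encoder" below are placeholders, not trained networks), the following Python sketch fits a latent code $z$ by gradient descent so that encoding the generated image matches a compressed measurement $m$, with an $\ell_2$ penalty weighted by a hyperparameter $\lambda$.

import numpy as np

rng = np.random.default_rng(1)

# Toy linear stand-ins for the pretrained networks (placeholders only).
G = rng.normal(size=(64, 8))          # latent code (8-d) -> image (64-d)
E = rng.normal(size=(8, 64)) / 8.0    # image (64-d) -> compressed code (8-d)

z_true = rng.normal(size=8)
m = E @ (G @ z_true)                  # compressed measurement of the target image

lam = 1e-3                            # penalty weight (hypothetical lambda)
z = np.zeros(8)
step = 1e-2
for _ in range(2000):
    # gradient of ||E(G(z)) - m||^2 + lam * ||z||^2 for this linear toy model
    residual = E @ (G @ z) - m
    grad = 2.0 * G.T @ (E.T @ residual) + 2.0 * lam * z
    z -= step * grad

x_hat = G @ z                         # recover the image generatively
print("relative error:", np.linalg.norm(x_hat - G @ z_true) / np.linalg.norm(G @ z_true))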
