The disparity in the impact of COVID-19 on minority populations in the United States has been well established in the available data on deaths, case counts, and adverse outcomes. However, critical metrics used by public health officials and epidemiologists, such as a time-dependent viral reproductive number (
Award ID(s): 1722578
NSF-PAR ID: 10297733
Journal Name: Foundations of Data Science
Volume: 3
Issue: 3
ISSN: 2639-8001
Page Range / eLocation ID: 479
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this

We consider the well-known Lieb–Liniger (LL) model for $N$ bosons interacting pairwise on the line via the $\delta$ potential in the mean-field scaling regime. Assuming suitable asymptotic factorization of the initial wave functions and convergence of the microscopic energy per particle, we show that the time-dependent reduced density matrices of the system converge in trace norm to the pure states given by the solution to the one-dimensional cubic nonlinear Schrödinger equation (NLS), with an explicit rate of convergence. In contrast to previous work [3] relying on the formalism of second quantization and coherent states and without an explicit rate, our proof is based on the counting method of Pickl [65, 66, 67] and Knowles and Pickl [44]. To overcome difficulties stemming from the singularity of the $\delta$ potential, we introduce a new short-range approximation argument that exploits the Hölder continuity of the $N$-body wave function in a single particle variable. By further exploiting the $L^2$-subcritical well-posedness theory for the 1D cubic NLS, we can prove mean-field convergence when the limiting solution to the NLS has finite mass, but only for a very special class of $N$-body initial states.
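For orientation, the mean-field LL Hamiltonian and its limiting equation take the following standard form; the $1/(N-1)$ coupling convention and the coupling constant $\kappa$ are assumptions for illustration, since the abstract does not fix normalizations:

```latex
% Mean-field Lieb–Liniger Hamiltonian (normalization assumed)
H_N \;=\; \sum_{i=1}^{N} -\partial_{x_i}^2
  \;+\; \frac{\kappa}{N-1} \sum_{1 \le i < j \le N} \delta(x_i - x_j),
% Limiting 1D cubic NLS for the condensate wave function \phi
\qquad i\,\partial_t \phi \;=\; -\partial_x^2 \phi + \kappa\,|\phi|^2 \phi .
```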
It is shown that for any positive integer $n \ge 3$, there is a stable irreducible $n \times n$ matrix $A$ with $2n+1-\lfloor\frac{n}{3}\rfloor$ nonzero entries exhibiting Turing instability. Moreover, when $n = 3$, the result is best possible, i.e., every $3 \times 3$ stable matrix with five or fewer nonzero entries will not exhibit Turing instability. Furthermore, we determine all possible $3 \times 3$ irreducible sign pattern matrices with 6 nonzero entries which can be realized by a matrix $A$ that exhibits Turing instability.
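Turing instability means a stable matrix $A$ (all eigenvalues with negative real part) becomes unstable when a nonnegative diagonal diffusion matrix $D$ is subtracted with some wavenumber weight, i.e., $A - k^2 D$ has an eigenvalue with positive real part for some $k > 0$. A minimal numerical check, using an illustrative $2 \times 2$ activator-inhibitor example rather than the $n \ge 3$ matrices of the paper (the matrices below are hypothetical, not taken from the paper):

```python
import numpy as np

def is_stable(M):
    """All eigenvalues of M have negative real part."""
    return np.max(np.linalg.eigvals(M).real) < 0

def exhibits_turing_instability(A, D, k_squared_grid):
    """A stable matrix A is Turing unstable w.r.t. diagonal diffusion D
    if A - k^2 D is unstable for some k^2 > 0 on the grid."""
    assert is_stable(A), "A itself must be stable"
    return any(not is_stable(A - s * D) for s in k_squared_grid)

# Hypothetical example: A is stable (trace -3 < 0, det 2 > 0),
# but unequal diffusion rates destabilize it.
A = np.array([[1.0, -2.0],
              [3.0, -4.0]])
D_unequal = np.diag([0.01, 1.0])   # slow activator, fast inhibitor
D_equal = np.eye(2)                # equal diffusion never destabilizes a stable A

grid = np.linspace(0.1, 50.0, 500)
print(exhibits_turing_instability(A, D_unequal, grid))  # True
print(exhibits_turing_instability(A, D_equal, grid))    # False
```

Scanning a grid of $k^2$ values suffices here because instability of $A - k^2 D$ holds on an open interval of $k^2$ when it occurs at all for a $2 \times 2$ system.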
Genetic variations in the COVID-19 virus are one of the main causes of the COVID-19 pandemic outbreak in 2020 and 2021. In this article, we aim to introduce a new type of model, a system coupling ordinary differential equations (ODEs) with a measure differential equation (MDE), stemming from the classical SIR model, for the distribution of variants. Specifically, we model the evolution of the susceptible $S$ and removed $R$ populations by ODEs and the infected $I$ population by an MDE comprised of a probability vector field (PVF) and a source term. In addition, the ODEs for $S$ and $R$ contain terms that are related to the measure $I$. We establish analytically the well-posedness of the coupled ODE-MDE system by using the generalized Wasserstein distance. We give two examples to show that the proposed ODE-MDE model coincides with the classical SIR model in case of constant or time-dependent parameters as special cases.
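For reference, the classical SIR system that the ODE-MDE model reduces to in the special cases mentioned, with transmission rate $\beta$ and recovery rate $\gamma$ (the normalization below is the standard textbook one, not taken from this excerpt):

```latex
% Classical SIR model (standard normalization assumed)
\frac{dS}{dt} = -\beta S I, \qquad
\frac{dI}{dt} = \beta S I - \gamma I, \qquad
\frac{dR}{dt} = \gamma I,
\qquad S + I + R \equiv \text{const}.
```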
Stochastic differential games have been used extensively to model agents' competition in finance, for instance, in P2P lending platforms from the Fintech industry, the banking system for systemic risk, and insurance markets. The recently proposed machine learning algorithm, deep fictitious play, provides a novel and efficient tool for finding Markovian Nash equilibria of large $N$-player asymmetric stochastic differential games [J. Han and R. Hu, Mathematical and Scientific Machine Learning Conference, pages 221-245, PMLR, 2020]. By incorporating the idea of fictitious play, the algorithm decouples the game into $N$ sub-optimization problems, and identifies each player's optimal strategy with the deep backward stochastic differential equation (BSDE) method in parallel and repeatedly. In this paper, we prove the convergence of deep fictitious play (DFP) to the true Nash equilibrium. We can also show that the strategy based on DFP forms an $\epsilon$-Nash equilibrium. We generalize the algorithm by proposing a new approach to decouple the games, and present numerical results of large population games showing the empirical convergence of the algorithm beyond the technical assumptions in the theorems.
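The fictitious-play idea behind the algorithm, each player repeatedly best-responding to the other players' empirical strategies, can be illustrated on a toy zero-sum matrix game (matching pennies), far simpler than the stochastic differential games of the paper; this sketch is illustrative only and is not the deep BSDE method:

```python
import numpy as np

def fictitious_play(payoff, rounds=5000):
    """Two-player zero-sum fictitious play.
    payoff[i, j] is player 1's reward; player 2 receives -payoff[i, j].
    Each round, both players best-respond to the opponent's empirical
    action frequencies. Returns the empirical mixed strategies."""
    n, m = payoff.shape
    counts1 = np.zeros(n)   # how often player 1 played each row
    counts2 = np.zeros(m)   # how often player 2 played each column
    counts1[0] += 1         # arbitrary initial actions
    counts2[0] += 1
    for _ in range(rounds):
        freq1 = counts1 / counts1.sum()
        freq2 = counts2 / counts2.sum()
        a1 = np.argmax(payoff @ freq2)   # player 1 maximizes expected payoff
        a2 = np.argmin(freq1 @ payoff)   # player 2 minimizes player 1's payoff
        counts1[a1] += 1
        counts2[a2] += 1
    return counts1 / counts1.sum(), counts2 / counts2.sum()

# Matching pennies: the unique Nash equilibrium is (1/2, 1/2) for both players.
pennies = np.array([[1.0, -1.0],
                    [-1.0, 1.0]])
f1, f2 = fictitious_play(pennies)
print(f1, f2)  # both close to [0.5, 0.5]
```

For zero-sum games, fictitious play's empirical frequencies are known to converge to a Nash equilibrium (Robinson's theorem), which is what the assertion on the output checks.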
This paper introduces a novel generative encoder (GE) framework for generative imaging and image processing tasks like image reconstruction, compression, denoising, inpainting, deblurring, and super-resolution. GE unifies the generative capacity of GANs and the stability of AEs in an optimization framework, instead of stacking GANs and AEs into a single network or combining their loss functions as in the existing literature. GE provides a novel approach to visualizing relationships between latent spaces and the data space. The GE framework is made up of a pre-training phase and a solving phase. In the former, a GAN with generator $G$ capturing the data distribution of a given image set, and an AE network with encoder $E$ that compresses images following the estimated distribution by $G$, are trained separately, resulting in two latent representations of the data, denoted as the generative and encoding latent spaces respectively. In the solving phase, given a noisy image $x = \mathcal{P}(x^*)$, where $x^*$ is the target unknown image and $\mathcal{P}$ is an operator adding additive, multiplicative, or convolutional noise, or equivalently given such an image $x$ in the compressed domain, i.e., given $m = E(x)$, the two latent spaces are unified by solving an optimization problem (with hyperparameter $\lambda > 0$) for a latent code $z^*$, and the image $x^*$ is recovered in a generative way via $\hat{x} := G(z^*) \approx x^*$. The unification of the two spaces allows improved performance against corresponding GAN and AE networks while visualizing interesting properties in each latent space.
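The excerpt omits the exact objective of the solving phase, so the sketch below assumes a generic latent-space matching form, $\min_z \|E(G(z)) - m\|^2 + \lambda \|z\|^2$, consistent with the surrounding description; the linear $G$ and $E$ and all matrices are hypothetical stand-ins for the trained networks of the paper:

```python
import numpy as np

# Toy linear stand-ins for generator G and encoder E (the real framework
# uses trained neural networks; these matrices are illustrative only).
W_g = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, -1.0],
                [2.0, 1.0]])        # G(z) = W_g @ z, latent dim 2 -> image dim 4
W_e = np.linalg.pinv(W_g)           # E(x) = W_e @ x

def G(z): return W_g @ z
def E(x): return W_e @ x

def solve_latent(m, lam=1e-3, lr=0.1, steps=500):
    """Gradient descent on the assumed objective
    f(z) = ||E(G(z)) - m||^2 + lam * ||z||^2."""
    A = W_e @ W_g
    z = np.zeros(W_g.shape[1])
    for _ in range(steps):
        grad = 2 * A.T @ (A @ z - m) + 2 * lam * z
        z -= lr * grad
    return z

z_true = np.array([0.7, -0.3])
x_true = G(z_true)       # "unknown" target image x*
m = E(x_true)            # its compressed representation m = E(x)
z_star = solve_latent(m)
x_hat = G(z_star)        # generative recovery, x_hat close to x_true
```

With a linear $G$ and $E$ the problem is a ridge-regression instance with a closed-form solution; gradient descent is used here only to mirror how one would optimize over the latent code when $G$ and $E$ are neural networks.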