Genetic variations in the COVID-19 virus are one of the main causes of the COVID-19 pandemic outbreaks in 2020 and 2021. In this article, we aim to introduce a new type of model, a system coupling ordinary differential equations (ODEs) with a measure differential equation (MDE), stemming from the classical SIR model, for the distribution of variants. Specifically, we model the evolution of the susceptible
Many dynamical systems described by nonlinear ODEs are unstable: their solutions do not converge towards an equilibrium point, but rather converge towards some invariant subset of the state space called an attractor set. For a given ODE, the existence, shape, and structure of its attractor sets are in general unknown. Fortunately, the sublevel sets of Lyapunov functions can provide bounds on the attractor sets of ODEs. In this paper we propose a new Lyapunov characterization of attractor sets that is well suited to the problem of finding the minimal attractor set. We show that our Lyapunov characterization is non-conservative even when restricted to Sum-of-Squares (SOS) Lyapunov functions. Given these results, we propose an SOS programming problem based on determinant maximization that yields an SOS Lyapunov function whose sublevel set provides a volume-optimized outer approximation of the minimal attractor set.
Award ID(s): 1931270
NSF-PAR ID: 10354588
Date Published:
Journal Name: Journal of Computational Dynamics
Volume: 0
Issue: 0
ISSN: 2158-2491
Page Range / eLocation ID: 0
Format(s): Medium: X
Sponsoring Org: National Science Foundation
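As a numerical illustration of how a sublevel set of a Lyapunov-like function can bound a non-equilibrium attractor, one can check that trajectories of the Van der Pol oscillator enter and remain in a sublevel set of a simple candidate function. This toy sketch is not the paper's SOS method; the system, the function V, and the level c = 12 are our own illustrative choices:

```python
import numpy as np

# Toy illustration (not the paper's SOS approach): the Van der Pol
# oscillator converges to a limit cycle, not an equilibrium. We check
# numerically that the sublevel set {V <= c} of the candidate function
# V(x, y) = x^2 + y^2 eventually traps the trajectory, i.e. that the
# sublevel set bounds the attractor set.

def vdp(state, mu=1.0):
    x, y = state
    return np.array([y, mu * (1.0 - x**2) * y - x])

def rk4_trajectory(x0, dt=0.01, steps=3000):
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        s = traj[-1]
        k1 = vdp(s)
        k2 = vdp(s + 0.5 * dt * k1)
        k3 = vdp(s + 0.5 * dt * k2)
        k4 = vdp(s + dt * k3)
        traj.append(s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(traj)

traj = rk4_trajectory([3.0, 3.0])   # start outside the sublevel set
V = (traj**2).sum(axis=1)           # V(x, y) = x^2 + y^2
tail = V[2000:]                     # after the transient has died out
```

After the transient, the trajectory stays inside {V <= 12} while remaining bounded away from the origin, consistent with a limit-cycle attractor rather than an equilibrium.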
More Like this

$S$ and removed $R$ populations by ODEs and the infected $I$ population by an MDE comprised of a probability vector field (PVF) and a source term. In addition, the ODEs for $S$ and $R$ contain terms that are related to the measure $I$. We establish analytically the well-posedness of the coupled ODE-MDE system by using the generalized Wasserstein distance. We give two examples to show that the proposed ODE-MDE model coincides with the classical SIR model in the case of constant or time-dependent parameters as special cases.
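For reference, the classical SIR system that the coupled model reduces to in the constant-parameter special case can be simulated in a few lines. A minimal forward-Euler sketch; the parameter values are illustrative, not taken from the paper:

```python
# Classical SIR model with constant transmission rate beta and recovery
# rate gamma, integrated with the forward Euler method. All values are
# illustrative.
def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, dt=0.1, steps=1600):
    s, i, r = s0, i0, 1.0 - s0 - i0
    history = [(s, i, r)]
    for _ in range(steps):
        ds = -beta * s * i          # susceptibles become infected
        di = beta * s * i - gamma * i  # infections minus recoveries
        dr = gamma * i              # infected are removed
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        history.append((s, i, r))
    return history

history = simulate_sir()
s_end, i_end, r_end = history[-1]
```

Since the three right-hand sides sum to zero, the total population S + I + R is conserved at every Euler step, a sanity check any discretization of the coupled ODE-MDE system should also satisfy.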
In this paper, we propose a new class of operator factorization methods to discretize the integral fractional Laplacian
$(-\Delta)^{\frac{\alpha}{2}}$ for $\alpha \in (0, 2)$. One main advantage is that our method can easily increase numerical accuracy by using high-degree Lagrange basis functions, while its scheme structure and computer implementation remain unchanged. Moreover, it results in a symmetric (multilevel) Toeplitz differentiation matrix, enabling efficient computation via fast Fourier transforms. If constant or linear basis functions are used, our method has an accuracy of $\mathcal{O}(h^2)$, while $\mathcal{O}(h^4)$ for quadratic basis functions, with $h$ a small mesh size. This accuracy can be achieved for any $\alpha \in (0, 2)$ and can be further increased if higher-degree basis functions are chosen. Numerical experiments are provided to approximate the fractional Laplacian and solve the fractional Poisson problems. They show that if the solution of the fractional Poisson problem satisfies $u \in C^{m, l}(\bar{\Omega})$ for $m \in {\mathbb N}$ and $0 < l < 1$, our method has an accuracy of $\mathcal{O}(h^{\min\{m+l, \, 2\}})$ for constant and linear basis functions, while $\mathcal{O}(h^{\min\{m+l, \, 4\}})$ for quadratic basis functions. Additionally, our method can be readily applied to approximate the generalized fractional Laplacians with symmetric kernel functions, and a numerical study on the tempered fractional Poisson problem demonstrates its efficiency.
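For intuition about the operator being discretized, the fractional Laplacian on a periodic interval can be approximated with the Fourier multiplier $|k|^{\alpha}$. This is a standard spectral sketch, not the operator factorization method proposed in the paper:

```python
import numpy as np

def fractional_laplacian_periodic(u, alpha, length=2.0 * np.pi):
    """Apply (-Delta)^(alpha/2) to samples u of a periodic function by
    multiplying its Fourier coefficients by |k|^alpha."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
    return np.fft.ifft(np.abs(k)**alpha * np.fft.fft(u)).real

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
# sin(x) is an eigenfunction with eigenvalue |±1|^alpha = 1 for every alpha
out = fractional_laplacian_periodic(np.sin(x), alpha=0.5)
```

The eigenfunction property makes sin(x) a convenient correctness check: the output should reproduce the input for any alpha in (0, 2).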
This paper introduces a novel generative encoder (GE) framework for generative imaging and image processing tasks such as image reconstruction, compression, denoising, inpainting, deblurring, and super-resolution. GE unifies the generative capacity of GANs and the stability of AEs in an optimization framework, instead of stacking GANs and AEs into a single network or combining their loss functions as in the existing literature. GE provides a novel approach to visualizing relationships between latent spaces and the data space. The GE framework is made up of a pre-training phase and a solving phase. In the former, a GAN with generator
$G$ capturing the data distribution of a given image set, and an AE network with encoder $E$ that compresses images following the estimated distribution by $G$, are trained separately, resulting in two latent representations of the data, referred to as the generative and encoding latent spaces, respectively. In the solving phase, given a noisy image $x = \mathcal{P}(x^*)$, where $x^*$ is the target unknown image and $\mathcal{P}$ is an operator adding additive, multiplicative, or convolutional noise, or equivalently given such an image $x$ in the compressed domain, i.e., given $m = E(x)$, the two latent spaces are unified via solving an optimization problem with hyperparameter $\lambda > 0$, and the image $x^*$ is recovered in a generative way via $\hat{x} := G(z^*) \approx x^*$. The unification of the two spaces allows improved performance against the corresponding GAN and AE networks while visualizing interesting properties in each latent space.
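This listing omits the exact optimization problem solved in the solving phase. One plausible penalized form is min_z ||E(G(z)) - m||^2 + lambda ||z||^2; the sketch below uses that assumed objective with toy linear maps standing in for the trained networks (the matrices, dimensions, and step size are all our own illustrative choices):

```python
import numpy as np

# Toy sketch of the solving phase. Linear maps stand in for the trained
# networks: "generator" G(z) = A z and "encoder" E(x) = B x. The penalized
# objective ||E(G(z)) - m||^2 + lam * ||z||^2 is an ASSUMED form.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])          # 4x2 "generator"
B = np.linalg.pinv(A)                # 2x4 "encoder" (left-inverts A here)
lam = 1e-3                           # hyperparameter lambda > 0

z_true = np.array([0.7, -0.4])
x_star = A @ z_true                  # target unknown image x*
m = B @ x_star                       # compressed observation m = E(x)

M = B @ A                            # composition E o G (identity here)
z = np.zeros(2)
for _ in range(500):                 # plain gradient descent on z
    grad = 2.0 * M.T @ (M @ z - m) + 2.0 * lam * z
    z -= 0.1 * grad
x_hat = A @ z                        # generative recovery: G(z*) ~ x*
```

With a small regularization weight the recovered image lands close to the target, illustrating how optimizing in the unified latent space yields the reconstruction.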
This paper investigates the global existence of weak solutions for the incompressible
$p$-Navier-Stokes equations in $\mathbb{R}^d$ $(2 \leq d \leq p)$. The $p$-Navier-Stokes equations are obtained by adding a viscosity term to the $p$-Euler equations. The diffusion added is represented by the $p$-Laplacian of the velocity, and the $p$-Euler equations are derived as the Euler-Lagrange equations for the action represented by the Benamou-Brenier characterization of Wasserstein-$p$ distances, with the constraint that the densities be characteristic functions.
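To make the diffusion term concrete, here is a minimal 1-D finite-difference sketch of the $p$-Laplacian $\Delta_p u = (|u'|^{p-2}u')'$, with the nonlinear flux evaluated at cell interfaces. The paper's setting is $\mathbb{R}^d$, so this 1-D version is purely illustrative:

```python
import numpy as np

def p_laplacian_1d(u, h, p):
    """Finite-difference p-Laplacian (|u'|^(p-2) u')' at interior points."""
    du = np.diff(u) / h              # gradient at cell interfaces
    flux = np.abs(du)**(p - 2) * du  # nonlinear flux |u'|^(p-2) u'
    return np.diff(flux) / h         # divergence of the flux

x = np.linspace(0.0, 1.0, 11)
u = x**2
h = x[1] - x[0]
lap2 = p_laplacian_1d(u, h, p=2)     # p = 2 recovers the Laplacian u'' = 2
lap3 = p_laplacian_1d(u, h, p=3)     # p = 3: Delta_3 x^2 = 8x, nonlinear
```

The p = 2 case reduces to the classical Laplacian, a useful sanity check; for p = 3 the scheme reproduces the exact value 8x at interior nodes for this particular u.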
In two dimensions, we consider the problem of inversion of the attenuated
$X$-ray transform of a compactly supported function from data restricted to lines leaning on a given arc. We provide a method to reconstruct the function on the convex hull of this arc. The attenuation is assumed known. The method of proof uses the Hilbert transform associated with $A$-analytic functions in the sense of Bukhgeim.
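The forward map discussed here (not the paper's inversion method) can be sketched numerically: the attenuated X-ray transform integrates $f$ along a line, weighting each point by the attenuation accumulated on the way out of the support, $\exp(-\int_t^{\infty} a\,ds)$. Function names and parameter values below are illustrative:

```python
import numpy as np

def attenuated_xray(f, a, point, theta, t_max=6.0, dt=0.01):
    """Riemann-sum approximation of the attenuated X-ray transform of f
    along the line point + t*theta, with known attenuation a."""
    ts = np.arange(-t_max, t_max, dt)
    pts = point[None, :] + ts[:, None] * np.asarray(theta)[None, :]
    fv = f(pts)
    av = a(pts)
    # attenuation accumulated from each sample toward the +theta end
    # of the line (approximates the integral of a from t to infinity)
    tail = np.cumsum(av[::-1])[::-1] * dt
    return float(np.sum(fv * np.exp(-tail)) * dt)

gaussian  = lambda pts: np.exp(-0.5 * (pts**2).sum(axis=1))  # f(x, y)
no_att    = lambda pts: np.zeros(pts.shape[0])               # a = 0
const_att = lambda pts: np.full(pts.shape[0], 0.1)           # a = 0.1

origin = np.array([0.0, 0.0])
line0 = attenuated_xray(gaussian, no_att, origin, (1.0, 0.0))
line1 = attenuated_xray(gaussian, const_att, origin, (1.0, 0.0))
```

With zero attenuation the transform reduces to the ordinary line integral (here the Gaussian's exact value sqrt(2*pi)), and any positive attenuation strictly decreases the measured value, which is the structure the inversion must undo.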