In this paper, we propose a new class of operator factorization methods to discretize the integral fractional Laplacian $(-\Delta)^{\alpha/2}$ for $\alpha \in (0, 2)$. One main advantage is that our method can easily increase numerical accuracy by using high-degree Lagrange basis functions while keeping its scheme structure and computer implementation unchanged. Moreover, it results in a symmetric (multilevel) Toeplitz differentiation matrix, enabling efficient computation via the fast Fourier transform. If constant or linear basis functions are used, our method has an accuracy of $\mathcal{O}(h^2)$, while quadratic basis functions give $\mathcal{O}(h^4)$, with $h$ a small mesh size. This accuracy can be achieved for any $\alpha \in (0, 2)$ and can be further increased if higher-degree basis functions are chosen. Numerical experiments are provided to approximate the fractional Laplacian and solve fractional Poisson problems. They show that if the solution of the fractional Poisson problem satisfies $u \in C^{m, l}(\bar{\Omega})$ for $m \in \mathbb{N}$ and $0 < l < 1$, our method has an accuracy of $\mathcal{O}(h^{\min\{m+l,\, 2\}})$ for constant and linear basis functions, and $\mathcal{O}(h^{\min\{m+l,\, 4\}})$ for quadratic basis functions. Additionally, our method can readily be applied to approximate generalized fractional Laplacians with symmetric kernel functions, and a numerical study of the tempered fractional Poisson problem demonstrates its efficiency.
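The FFT-based computation rests on a standard fact: a symmetric Toeplitz matrix can be embedded in a circulant matrix of twice the size, which the FFT diagonalizes. Below is a minimal Python sketch of this $\mathcal{O}(n \log n)$ matrix-vector product; the column `c` is a placeholder for the method's actual differentiation-matrix entries, which the paper derives from the chosen Lagrange basis.

```python
import numpy as np

def toeplitz_matvec(c, x):
    """Multiply a symmetric Toeplitz matrix by a vector in O(n log n).

    c : first column of the symmetric Toeplitz matrix, length n.
    x : vector of length n.
    The matrix is embedded in a 2n x 2n circulant matrix, which the
    FFT diagonalizes.
    """
    n = len(c)
    # Circulant embedding: first column [c0, ..., c_{n-1}, 0, c_{n-1}, ..., c1]
    col = np.concatenate([c, [0.0], c[-1:0:-1]])
    eig = np.fft.fft(col)                       # eigenvalues of the circulant
    y = np.fft.ifft(eig * np.fft.fft(x, 2 * n)) # circulant times zero-padded x
    return y[:n].real

# Sanity check against a dense symmetric Toeplitz multiplication
rng = np.random.default_rng(0)
c, x = rng.standard_normal(6), rng.standard_normal(6)
T = np.array([[c[abs(i - j)] for j in range(6)] for i in range(6)])
assert np.allclose(T @ x, toeplitz_matvec(c, x))
```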
Feedback particle filter for collective inference
The purpose of this paper is to describe the feedback particle filter algorithm for problems where there are a large number ($M$) of non-interacting agents (targets) together with a large number ($M$) of non-agent-specific observations (measurements) that originate from these agents. In its basic form, the problem is characterized by data association uncertainty, whereby the association between the observations and the agents must be deduced in addition to the agent states. In this paper, the large-$M$ limit is interpreted as a problem of collective inference. This viewpoint is used to derive the equation for the empirical distribution of the hidden agent states. A feedback particle filter (FPF) algorithm for this problem is presented and illustrated via numerical simulations. Results are presented for the Euclidean and the finite state-space cases, both in continuous-time settings. The classical FPF algorithm is shown to be the special case (with $M = 1$) of these more general results. The simulations show that the algorithm approximates the empirical distribution of the hidden states well for large $M$.
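As a concrete illustration of the classical FPF (the $M = 1$ special case mentioned above), here is a minimal Python sketch of one Euler-Maruyama step, using the well-known constant-gain approximation of the FPF gain rather than an exact solution of the weighted Poisson equation. The scalar setting and function names are illustrative assumptions, not the paper's collective-inference algorithm.

```python
import numpy as np

def fpf_step(X, dZ, a, h, sigma, dt, rng):
    """One Euler-Maruyama step of a constant-gain feedback particle filter.

    X     : array of N particle states (scalar dynamics, illustrative).
    dZ    : observation increment over this time step.
    a, h  : drift and observation functions (callables).
    Uses the standard constant-gain approximation K = E[(h - h_hat) X],
    not the exact gain from the weighted Poisson equation.
    """
    N = len(X)
    hX = h(X)
    h_hat = hX.mean()
    K = np.mean((hX - h_hat) * X)          # constant-gain approximation
    dB = rng.standard_normal(N) * np.sqrt(dt)
    innovation = dZ - 0.5 * (hX + h_hat) * dt
    return X + a(X) * dt + sigma * dB + K * innovation
```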
- Award ID(s): 1761622
- PAR ID: 10340146
- Date Published:
- Journal Name: Foundations of Data Science
- Volume: 3
- Issue: 3
- ISSN: 2639-8001
- Page Range / eLocation ID: 543
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Stochastic differential games have been used extensively to model agents' competition in finance, for instance, in P2P lending platforms from the Fintech industry, the banking system for systemic risk, and insurance markets. The recently proposed machine learning algorithm, deep fictitious play, provides a novel and efficient tool for finding the Markovian Nash equilibrium of large $N$-player asymmetric stochastic differential games [J. Han and R. Hu, Mathematical and Scientific Machine Learning Conference, pages 221-245, PMLR, 2020]. By incorporating the idea of fictitious play, the algorithm decouples the game into $N$ sub-optimization problems and identifies each player's optimal strategy with the deep backward stochastic differential equation (BSDE) method, in parallel and repeatedly. In this paper, we prove the convergence of deep fictitious play (DFP) to the true Nash equilibrium. We also show that the strategy based on DFP forms an $\epsilon$-Nash equilibrium. We generalize the algorithm by proposing a new approach to decouple the games, and we present numerical results for large population games showing the empirical convergence of the algorithm beyond the technical assumptions in the theorems.
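To illustrate the decoupling idea that deep fictitious play borrows, the following Python sketch runs classical fictitious play on a toy two-player matrix game: each player repeatedly best-responds to the opponent's empirical strategy while the opponent is frozen. This is only an analogy for the $N$-player stochastic differential game setting; the paper's deep BSDE best-response solver is not reproduced here, and the game below is an illustrative assumption.

```python
import numpy as np

def fictitious_play(A, B, n_iter=5000):
    """Classical fictitious play for a 2-player bimatrix game (payoffs A, B).

    Each player best-responds to the empirical distribution of the
    opponent's past actions; in zero-sum games the empirical frequencies
    converge to a Nash equilibrium.
    """
    m, n = A.shape
    counts1, counts2 = np.zeros(m), np.zeros(n)
    counts1[0] = counts2[0] = 1.0   # arbitrary initial actions
    for _ in range(n_iter):
        # Player 1 best-responds to player 2's empirical mixed strategy
        br1 = np.argmax(A @ (counts2 / counts2.sum()))
        # Player 2 best-responds to player 1's empirical mixed strategy
        br2 = np.argmax((counts1 / counts1.sum()) @ B)
        counts1[br1] += 1
        counts2[br2] += 1
    return counts1 / counts1.sum(), counts2 / counts2.sum()

# Matching pennies (zero-sum): frequencies approach (1/2, 1/2)
A = np.array([[1., -1.], [-1., 1.]])
p, q = fictitious_play(A, -A)
```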
This paper introduces a novel generative encoder (GE) framework for generative imaging and image processing tasks like image reconstruction, compression, denoising, inpainting, deblurring, and super-resolution. GE unifies the generative capacity of GANs and the stability of AEs in an optimization framework, instead of stacking GANs and AEs into a single network or combining their loss functions as in the existing literature. GE provides a novel approach to visualizing relationships between latent spaces and the data space. The GE framework consists of a pre-training phase and a solving phase. In the former, a GAN with generator $G$ capturing the data distribution of a given image set and an AE network with encoder $E$ that compresses images following the distribution estimated by $G$ are trained separately, resulting in two latent representations of the data, called the generative and encoding latent spaces, respectively. In the solving phase, given a noisy image $x = \mathcal{P}(x^*)$, where $x^*$ is the target unknown image and $\mathcal{P}$ is an operator adding additive, multiplicative, or convolutional noise, or equivalently given such an image $x$ in the compressed domain, i.e., given $m = E(x)$, the two latent spaces are unified via solving the optimization problem $z^* = \underset{z}{\mathrm{argmin}} \|E(G(z)) - m\|_2^2 + \lambda \|z\|_2^2$, and the image $x^*$ is recovered in a generative way via $\hat{x} := G(z^*) \approx x^*$, where $\lambda > 0$ is a hyperparameter. The unification of the two spaces allows improved performance over the corresponding GAN and AE networks while visualizing interesting properties of each latent space.
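The solving phase is an ordinary finite-dimensional optimization once $G$ and $E$ are fixed. The sketch below minimizes $\|E(G(z)) - m\|_2^2 + \lambda \|z\|_2^2$ by gradient descent in PyTorch, with small random linear maps standing in for the pretrained generator and encoder; the dimensions and hyperparameters are illustrative assumptions, not the paper's trained networks.

```python
import torch

# Toy stand-ins for the pretrained generator G and encoder E; in the GE
# framework these are trained networks, here just fixed linear maps.
torch.manual_seed(0)
G = torch.nn.Linear(8, 64)    # latent z (dim 8) -> image (dim 64)
E = torch.nn.Linear(64, 16)   # image -> encoding m (dim 16)
for p in list(G.parameters()) + list(E.parameters()):
    p.requires_grad_(False)   # networks are frozen in the solving phase

def solve_latent(m, lam=1e-3, steps=500, lr=1e-2):
    """Solve z* = argmin_z ||E(G(z)) - m||^2 + lam * ||z||^2 by gradient descent."""
    z = torch.zeros(8, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.sum((E(G(z)) - m) ** 2) + lam * torch.sum(z ** 2)
        loss.backward()
        opt.step()
    return z.detach()

m = E(G(torch.randn(8)))       # encoding of a synthetic "clean" image
x_hat = G(solve_latent(m))     # recovered image G(z*)
```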
For any finite horizon Sinai billiard map $T$ on the two-torus, we find $t_* > 1$ such that for each $t \in (0, t_*)$ there exists a unique equilibrium state $\mu_t$ for $-t \log J^u T$, and $\mu_t$ is $T$-adapted. (In particular, the SRB measure is the unique equilibrium state for $-\log J^u T$.) We show that $\mu_t$ is exponentially mixing for Hölder observables, and the pressure function $P(t) = \sup_\mu \{h_\mu - \int t \log J^u T \, d\mu\}$ is analytic on $(0, t_*)$. In addition, $P(t)$ is strictly convex if and only if $\log J^u T$ is not $\mu_t$-a.e. cohomologous to a constant, while, if there exist $t_a \ne t_b$ with $\mu_{t_a} = \mu_{t_b}$, then $P(t)$ is affine on $(0, t_*)$. An additional sparse recurrence condition gives $\lim_{t \downarrow 0} P(t) = P(0)$.
Realizing arbitrary $d$-dimensional dynamics by renormalization of $C^d$-perturbations of identity
Any $C^d$ conservative map $f$ of the $d$-dimensional unit ball ${\mathbb B}^d$, $d \geq 2$, can be realized by a renormalized iteration of a $C^d$ perturbation of the identity: there exists a conservative diffeomorphism of ${\mathbb B}^d$, arbitrarily close to the identity in the $C^d$ topology, that has a periodic disc on which the return dynamics after a $C^d$ change of coordinates is exactly $f$.