In this paper, we propose a new class of operator factorization methods to discretize the integral fractional Laplacian
Award ID(s): 2007040
NSF-PAR ID: 10335267
Date Published:
Journal Name: Information and Inference: A Journal of the IMA
ISSN: 2049-8772
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this

$(-\Delta)^{\alpha/2}$ for $\alpha \in (0, 2)$. One main advantage is that our method can easily increase the numerical accuracy by using high-degree Lagrange basis functions, while its scheme structure and computer implementation remain unchanged. Moreover, it results in a symmetric (multilevel) Toeplitz differentiation matrix, enabling efficient computation via the fast Fourier transform. If constant or linear basis functions are used, our method has an accuracy of ${\mathcal O}(h^2)$, while ${\mathcal O}(h^4)$ for quadratic basis functions, with $h$ a small mesh size. This accuracy can be achieved for any $\alpha \in (0, 2)$ and can be further increased if higher-degree basis functions are chosen. Numerical experiments are provided to approximate the fractional Laplacian and solve fractional Poisson problems. They show that if the solution of the fractional Poisson problem satisfies $u \in C^{m, l}(\bar{\Omega})$ for $m \in {\mathbb N}$ and $0 < l < 1$, our method has an accuracy of ${\mathcal O}(h^{\min\{m+l, \, 2\}})$ for constant and linear basis functions, while ${\mathcal O}(h^{\min\{m+l, \, 4\}})$ for quadratic basis functions. Additionally, our method can be readily applied to approximate generalized fractional Laplacians with symmetric kernel functions, and a numerical study on the tempered fractional Poisson problem demonstrates its efficiency.
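The symmetric Toeplitz structure of the differentiation matrix is what enables the FFT-based computation mentioned above: a Toeplitz matrix–vector product can be embedded in a circulant one and evaluated in $O(n \log n)$. A minimal one-level sketch (the function name and layout are illustrative, not the paper's implementation):

```python
import numpy as np

def toeplitz_matvec(c, x):
    """Multiply a symmetric Toeplitz matrix by a vector in O(n log n).

    c : first column of the symmetric Toeplitz matrix T, length n.
    x : vector of length n.
    Returns T @ x, computed by embedding T in a 2n-periodic circulant
    matrix and multiplying via the FFT.
    """
    n = len(c)
    # First column of the circulant embedding:
    # [c_0, ..., c_{n-1}, 0, c_{n-1}, ..., c_1]
    circ = np.concatenate([c, [0.0], c[-1:0:-1]])
    x_pad = np.concatenate([x, np.zeros(n)])
    # Circulant matvec = inverse FFT of the product of FFTs
    y = np.fft.irfft(np.fft.rfft(circ) * np.fft.rfft(x_pad), len(circ))
    return y[:n]
```

The same idea extends level by level to the multilevel Toeplitz case using multidimensional FFTs.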
Given only a finite collection of points sampled from a Riemannian manifold embedded in a Euclidean space, in this paper we propose a new method to numerically solve elliptic and parabolic partial differential equations (PDEs) supplemented with boundary conditions. Since the construction of triangulations on unknown manifolds can be both difficult and expensive, in terms of both computational and data requirements, our goal is to solve these problems without a triangulation. Instead, we rely only on using the sample points to define quadrature formulas on the unknown manifold. Our main tool is the diffusion maps algorithm. We reanalyze this well-known method in a variational sense for manifolds with boundary. Our main result is that the variational diffusion maps graph Laplacian is a consistent estimator of the Dirichlet energy on the manifold. This improves upon previous results and provides a rigorous justification of the well-known relationship between diffusion maps and the Neumann eigenvalue problem. Moreover, using semi-geodesic coordinates we derive the first uniform asymptotic expansion of the diffusion maps kernel integral operator for manifolds with boundary. This expansion relies on a novel lemma which relates the extrinsic Euclidean distance to the coordinate norm in a normal collar of the boundary. We then use a recently developed method of estimating the distance-to-boundary function (the boundary location is assumed to be unknown) to construct a consistent estimator for boundary integrals. Finally, by combining these various estimators, we illustrate how to impose Dirichlet and Neumann conditions for some common PDEs based on the Laplacian. Several numerical examples illustrate our theoretical findings.

Nie, Qing (Ed.) The analysis of single-cell genomics data presents several statistical challenges, and extensive efforts have been made to produce methods for the analysis of this data that impute missing values, address sampling issues, and quantify and correct for noise. In spite of such efforts, no consensus on best practices has been established, and all current approaches vary substantially based on the available data and empirical tests. The k-Nearest Neighbor Graph (kNNG) is often used to infer the identities of, and relationships between, cells and is the basis of many widely used dimensionality-reduction and projection methods. The kNNG has also been the basis for imputation methods using, e.g., neighbor averaging and graph diffusion. However, due to the lack of an agreed-upon optimal objective function for choosing hyperparameters, these methods tend to oversmooth data, thereby resulting in a loss of information with regard to cell identity and the specific gene-to-gene patterns underlying regulatory mechanisms. In this paper, we investigate the tuning of kNN- and diffusion-based denoising methods with a novel non-stochastic method for optimally preserving biologically relevant informative variance in single-cell data. The framework, Denoising Expression data with a Weighted Affinity Kernel and Self-Supervision (DEWÄKSS), uses a self-supervised technique to tune its parameters. We demonstrate that denoising with optimal parameters selected by our objective function (i) is robust to preprocessing methods using data from established benchmarks, (ii) disentangles cellular identity and maintains robust clusters over dimension-reduction methods, (iii) maintains variance along several expression dimensions, unlike previous heuristic-based methods that tend to oversmooth data variance, and (iv) rarely involves diffusion but rather uses a fixed weighted kNN graph for denoising. Together, these findings provide a new understanding of kNN- and diffusion-based denoising methods.
Code and example data for DEWÄKSS are available at https://gitlab.com/Xparx/dewakss//tree/Tjarnberg2020branch .
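As a rough illustration of the fixed weighted kNN-graph denoising discussed above, one step of row-stochastic neighbor averaging might look like the following. This is a generic sketch, not the DEWÄKSS implementation or its self-supervised parameter selection:

```python
import numpy as np

def knn_denoise(X, k=5):
    """One step of weighted kNN-graph neighbor averaging (illustrative only).

    X : (cells, genes) expression matrix.
    k : number of nearest neighbors per cell.
    Returns X smoothed by a row-stochastic Gaussian-weighted kNN affinity,
    i.e. a fixed weighted graph rather than iterated diffusion.
    """
    n = X.shape[0]
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(D2, np.inf)                 # exclude self-edges
    idx = np.argsort(D2, axis=1)[:, :k]          # k nearest neighbors per cell
    W = np.zeros((n, n))
    for i in range(n):
        d = D2[i, idx[i]]
        sigma2 = d[-1] if d[-1] > 0 else 1.0     # adaptive bandwidth: k-th distance
        W[i, idx[i]] = np.exp(-d / sigma2)
    W /= W.sum(axis=1, keepdims=True)            # row-stochastic weights
    return W @ X
```

The hyperparameter the abstract is concerned with is essentially `k` (and the weighting): too large and distinct cell identities are averaged together, which is the oversmoothing failure mode described above.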

ABSTRACT We report three-dimensional hydrodynamical simulations of shocks (${\cal M_{\rm shock}}\ge 4$) interacting with fractal multicloud layers. The evolution of shock–multicloud systems consists of four stages: a shock-splitting phase in which reflected and refracted shocks are generated, a compression phase in which the forward shock compresses cloud material, an expansion phase triggered by internal heating and shock re-acceleration, and a mixing phase in which shear instabilities generate turbulence. We compare multicloud layers with narrow ($\sigma _{\rho }=1.9\bar{\rho }$) and wide ($\sigma _{\rho }=5.9\bar{\rho }$) lognormal density distributions characteristic of Mach ≈ 5 supersonic turbulence driven by solenoidal and compressive modes. Our simulations show that outflowing cloud material contains imprints of the density structure of its native environment. The dynamics and disruption of multicloud systems depend on the porosity and the number of cloudlets in the layers. ‘Solenoidal’ layers mix less, generate less turbulence, accelerate faster, and form a more coherent mixed-gas shell than the more porous ‘compressive’ layers. Similarly, multicloud systems with more cloudlets quench mixing via a shielding effect and enhance momentum transfer. Mass loading of diffuse mixed gas is efficient in all models, but direct dense-gas entrainment is highly inefficient. Dense gas only survives in compressive clouds, but has low speeds. If normalized with respect to the shock-passage time, the evolution shows invariance for shock Mach numbers ≥ 10 and different cloud-generating seeds, and slightly weaker scaling for lower Mach numbers and thinner cloud layers. Multicloud systems also have better convergence properties than single-cloud systems, with a resolution of eight cells per cloud radius being sufficient to capture their overall dynamics.
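The narrow and wide density distributions quoted above ($\sigma_\rho = 1.9\bar{\rho}$ and $5.9\bar{\rho}$) can be reproduced pointwise by inverting the lognormal moment relations, as in this sketch; it ignores the spatial (fractal) correlations that the actual cloud layers impose:

```python
import numpy as np

def lognormal_density(mean_rho, sigma_rho, size, rng):
    """Draw densities from a lognormal distribution with prescribed
    linear mean and standard deviation (illustrative sketch only).

    For rho = exp(N(mu, s^2)):
      mean = exp(mu + s^2 / 2),  var = (exp(s^2) - 1) * mean^2,
    which inverts to the (mu, s) used below.
    """
    s2 = np.log(1.0 + (sigma_rho / mean_rho) ** 2)
    mu = np.log(mean_rho) - 0.5 * s2
    return rng.lognormal(mean=mu, sigma=np.sqrt(s2), size=size)
```

The wide ($5.9\bar{\rho}$) case produces a far heavier high-density tail, which is what gives the ‘compressive’ layers their greater porosity and surviving dense cloudlets.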

Abstract In the (special) smoothing spline problem one considers a variational problem with a quadratic data fidelity penalty and Laplacian regularization. Higher-order regularity can be obtained by replacing the Laplacian regulariser with a poly-Laplacian regulariser. The methodology is readily adapted to graphs, and here we consider graph poly-Laplacian regularization in a fully supervised, non-parametric, noise-corrupted regression problem. In particular, given a dataset $\{x_i\}_{i=1}^n$ and a set of noisy labels $\{y_i\}_{i=1}^n\subset \mathbb{R}$, we let $u_n:\{x_i\}_{i=1}^n\rightarrow \mathbb{R}$ be the minimizer of an energy which consists of a data fidelity term and an appropriately scaled graph poly-Laplacian term. When $y_i = g(x_i)+\xi _i$, for iid noise $\xi _i$, and using the geometric random graph, we identify (with high probability) the rate of convergence of $u_n$ to $g$ in the large data limit $n\rightarrow \infty$. Furthermore, our rate is close to the known rate of convergence in the usual smoothing spline model.
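The energy described above, a data fidelity term plus a scaled graph poly-Laplacian term, has a closed-form minimizer. The sketch below uses an assumed scaling parameter `tau` and the unnormalized graph Laplacian; the paper's precise scaling of the regularizer differs:

```python
import numpy as np

def graph_poly_laplacian_regression(W, y, m=2, tau=1.0):
    """Sketch of graph poly-Laplacian regression (notation assumed):
    minimize over u the energy  |u - y|^2 + tau * u^T L^m u,
    whose closed-form minimizer is  u = (I + tau * L^m)^{-1} y.

    W : (n, n) symmetric nonnegative weight matrix of the graph.
    y : (n,) vector of noisy labels y_i = g(x_i) + xi_i.
    m : order of the poly-Laplacian regulariser.
    """
    L = np.diag(W.sum(axis=1)) - W           # unnormalized graph Laplacian
    A = np.eye(len(y)) + tau * np.linalg.matrix_power(L, m)
    return np.linalg.solve(A, y)             # minimizer u_n
```

On a geometric random graph built from $\{x_i\}$, this $u_n$ is the estimator whose convergence rate to $g$ the paper analyzes as $n \rightarrow \infty$.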