

Title: Multidimensional WENO-AO Reconstructions Using a Simplified Smoothness Indicator and Applications to Conservation Laws
Abstract

Finite volume, weighted essentially non-oscillatory (WENO) schemes require the computation of a smoothness indicator. This can be expensive, especially in multiple space dimensions. We consider the use of the simple smoothness indicator $$\sigma ^{\textrm{S}}= \frac{1}{N_{\textrm{S}}-1}\sum _{j} ({\bar{u}}_{j} - {\bar{u}}_{m})^2$$, where $$N_{\textrm{S}}$$ is the number of mesh elements in the stencil, $${\bar{u}}_j$$ is the local function average over mesh element j, and index m gives the target element. Reconstructions utilizing standard WENO weighting fail with this smoothness indicator. We develop a modification of WENO-Z weighting that gives a reliable and accurate reconstruction of adaptive order, which we denote as SWENOZ-AO. We prove that it attains the order of accuracy of the large stencil polynomial approximation when the solution is smooth, and drops to the order of the small stencil polynomial approximations when there is a jump discontinuity in the solution. Numerical examples in one and two space dimensions on general meshes verify the approximation properties of the reconstruction. They also show it to be about 10 times faster in two space dimensions than reconstructions using the classic smoothness indicator. The new reconstruction is applied to define finite volume schemes to approximate the solution of hyperbolic conservation laws. Numerical tests show results of the same quality as standard WENO schemes using the classic smoothness indicator, but with an overall speedup in the computation time of about 3.5–5 times in 2D tests. Moreover, the computational efficiency (CPU time versus error) is noticeably improved.
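As an illustration, the simple smoothness indicator above can be computed directly from cell averages. The Python sketch below (the function name and the test data are our own, for illustration only, not from the paper) shows the expected behavior: $$\sigma^{\textrm{S}}$$ stays small on smooth data and becomes O(1) when the stencil straddles a jump discontinuity.

```python
import numpy as np

def simple_smoothness(u_bar, stencil, m):
    """Simple smoothness indicator sigma^S over a stencil of cell averages.

    u_bar   : 1D array of cell averages on the mesh
    stencil : indices of the N_S mesh elements forming the stencil
    m       : index of the target element
    """
    n_s = len(stencil)
    return np.sum((u_bar[stencil] - u_bar[m]) ** 2) / (n_s - 1)

# Illustrative cell-average data on a uniform 1D mesh.
x = np.linspace(0.0, 1.0, 11)
smooth = np.sin(x)                      # smooth profile
jump = np.where(x < 0.5, 0.0, 1.0)      # jump discontinuity at x = 0.5

stencil = np.array([3, 4, 5, 6, 7])     # 5-cell stencil, target m = 5
s_smooth = simple_smoothness(smooth, stencil, 5)
s_jump = simple_smoothness(jump, stencil, 5)

# The indicator separates smooth from discontinuous data.
assert s_jump > s_smooth
```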

 
Award ID(s):
1912735
NSF-PAR ID:
10446327
Publisher / Repository:
Springer Science + Business Media
Journal Name:
Journal of Scientific Computing
Volume:
97
Issue:
1
ISSN:
0885-7474
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    It has been recently established in David and Mayboroda (Approximation of Green functions and domains with uniformly rectifiable boundaries of all dimensions, arXiv:2010.09793) that on uniformly rectifiable sets the Green function is almost affine in the weak sense, and moreover, in some scenarios such Green function estimates are equivalent to the uniform rectifiability of a set. The present paper tackles a strong analogue of these results, starting with the “flagship” degenerate operators on sets with lower-dimensional boundaries. We consider the elliptic operators $$L_{\beta ,\gamma } = -{\text {div}}\, D^{d+1+\gamma -n} \nabla $$ associated to a domain $$\Omega \subset {\mathbb {R}}^n$$ with a uniformly rectifiable boundary $$\Gamma $$ of dimension $$d < n-1$$, the now usual distance to the boundary $$D = D_\beta $$ given by $$D_\beta (X)^{-\beta } = \int _{\Gamma } |X-y|^{-d-\beta }\, d\sigma (y)$$ for $$X \in \Omega $$, where $$\beta > 0$$ and $$\gamma \in (-1,1)$$. In this paper we show that the Green function G for $$L_{\beta ,\gamma }$$, with pole at infinity, is well approximated by multiples of $$D^{1-\gamma }$$, in the sense that the function $$\big | D\nabla \big (\ln \big ( \frac{G}{D^{1-\gamma }} \big )\big )\big |^2$$ satisfies a Carleson measure estimate on $$\Omega $$. We underline that the strong and the weak results are different in nature and, of course, at the level of the proofs: the latter extensively used compactness arguments, while the present paper relies on some intricate integration by parts and the properties of the “magical” distance function from David et al. (Duke Math J, to appear).

     
  2. Abstract

    The free multiplicative Brownian motion $$b_{t}$$ is the large-N limit of the Brownian motion on $$\mathsf {GL}(N;\mathbb {C})$$, in the sense of $$*$$-distributions. The natural candidate for the large-N limit of the empirical distribution of eigenvalues is thus the Brown measure of $$b_{t}$$. In previous work, the second and third authors showed that this Brown measure is supported in the closure of a region $$\Sigma _{t}$$ that appeared in the work of Biane. In the present paper, we compute the Brown measure completely. It has a continuous density $$W_{t}$$ on $$\overline{\Sigma }_{t}$$, which is strictly positive and real analytic on $$\Sigma _{t}$$. This density has a simple form in polar coordinates: $$W_{t}(r,\theta )=\frac{1}{r^{2}}w_{t}(\theta )$$, where $$w_{t}$$ is an analytic function determined by the geometry of the region $$\Sigma _{t}$$. We show also that the spectral measure of free unitary Brownian motion $$u_{t}$$ is a “shadow” of the Brown measure of $$b_{t}$$, precisely mirroring the relationship between the circular and semicircular laws. We develop several new methods, based on stochastic differential equations and PDE, to prove these results.

     
  3. Abstract

    For a smooth projective variety X over an algebraic number field k, a conjecture of Bloch and Beilinson predicts that the kernel of the Albanese map of X is a torsion group. In this article we consider a product $$X=C_1\times \cdots \times C_d$$ of smooth projective curves and show that if the conjecture is true for any subproduct of two curves, then it is true for X. For a product $$X=C_1\times C_2$$ of two curves over $$\mathbb {Q}$$ with positive genus we construct many nontrivial examples that satisfy the weaker property that the image of the natural map $$J_1(\mathbb {Q})\otimes J_2(\mathbb {Q})\xrightarrow {\varepsilon }{{\,\textrm{CH}\,}}_0(C_1\times C_2)$$ is finite, where $$J_i$$ is the Jacobian variety of $$C_i$$. Our constructions include many new examples of non-isogenous pairs of elliptic curves $$E_1, E_2$$ with positive rank, including the first known examples of rank greater than 1. Combining these constructions with our previous result, we obtain infinitely many nontrivial products $$X=C_1\times \cdots \times C_d$$ for which the analogous map $$\varepsilon $$ has finite image.

     
  4. Abstract

    We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $$p_{\text {noisy}}$$ and the corresponding noiseless output distribution $$p_{\text {ideal}}$$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark F that measures this correlation behaves as $$F=\exp (-2s\epsilon \pm O(s\epsilon ^2))$$, where $$\epsilon $$ is the probability of error per circuit location and s is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $$p_{\text {noisy}}$$ and the uniform distribution $$p_{\text {unif}}$$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $$p_{\text {noisy}}\approx F p_{\text {ideal}} + (1-F) p_{\text {unif}}$$. In other words, although at least one local error occurs with probability $$1-F$$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $$O(F\epsilon \sqrt{s})$$. Thus, the “white-noise approximation” is meaningful when $$\epsilon \sqrt{s} \ll 1$$, a quadratically weaker condition than the $$\epsilon s \ll 1$$ requirement to maintain high fidelity.
    The bound applies if the circuit size satisfies $$s \ge \Omega (n\log (n))$$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $$\epsilon ^{-1} \ge {\tilde{\Omega }}(n)$$, which is needed to ensure errors are scrambled faster than F decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
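As a rough numerical illustration of the white-noise approximation described above, the Python sketch below builds $$p_{\text{noisy}} \approx F p_{\text{ideal}} + (1-F) p_{\text{unif}}$$ from a stand-in ideal distribution. All parameter values and the exponential shape of the stand-in distribution are our own assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_outcomes = 2 ** 10    # 10-qubit output space (illustrative)
s = 200                 # number of two-qubit gates (assumed)
eps = 1e-3              # error probability per circuit location (assumed)

# Leading-order fidelity from the abstract: F = exp(-2 * s * eps).
F = np.exp(-2 * s * eps)

# Stand-in "ideal" output distribution (exponential shape, illustrative).
p_ideal = rng.exponential(size=n_outcomes)
p_ideal /= p_ideal.sum()
p_unif = np.full(n_outcomes, 1.0 / n_outcomes)

# White-noise model for the noisy output distribution.
p_noisy = F * p_ideal + (1 - F) * p_unif

assert np.isclose(p_noisy.sum(), 1.0)   # still a probability distribution
# The regime eps*sqrt(s) << 1 is quadratically weaker than eps*s << 1.
assert eps * np.sqrt(s) < eps * s
```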

     
  5. Abstract

    Leptoquarks ($$\textrm{LQ}$$s) are hypothetical particles that appear in various extensions of the Standard Model (SM) and can explain observed differences between SM theory predictions and experimental results. The production of these particles has been widely studied at various experiments, most recently at the Large Hadron Collider (LHC), and stringent bounds have been placed on their masses and couplings, assuming the simplest beyond-SM (BSM) hypotheses. However, the limits are significantly weaker for $$\textrm{LQ}$$ models with family non-universal couplings containing enhanced couplings to third-generation fermions. We present a new study on the production of a $$\textrm{LQ}$$ at the LHC, with preferential couplings to third-generation fermions, considering proton-proton collisions at $$\sqrt{s} = 13\, \textrm{TeV}$$ and $$\sqrt{s} = 13.6\, \textrm{TeV}$$. Such a hypothesis is well motivated theoretically, and it can explain the recent anomalies in the precision measurements of $$\textrm{B}$$-meson decay rates, specifically the $$R_{D^{(*)}}$$ ratios. Under a simplified model where the $$\textrm{LQ}$$ masses and couplings are free parameters, we focus on cases where the $$\textrm{LQ}$$ decays to a $$\tau $$ lepton and a $$\textrm{b}$$ quark, and study how the results are affected by different assumptions about chiral currents and interference effects with other BSM processes with the same final states, such as diagrams with a heavy vector boson, $$\textrm{Z}^{\prime }$$. The analysis is performed using machine learning techniques, resulting in an increased discovery reach at the LHC and allowing us to probe new physics phase space which addresses the $$\textrm{B}$$-meson anomalies, for $$\textrm{LQ}$$ masses up to $$5.00\, \textrm{TeV}$$, for the high-luminosity LHC scenario.

     