Finite volume, weighted essentially non-oscillatory (WENO) schemes require the computation of a smoothness indicator. This can be expensive, especially in multiple space dimensions. We consider the use of the simple smoothness indicator
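The cost issue the abstract raises can be made concrete. Below is an illustrative sketch (ours, not the paper's code) of the classical Jiang–Shu smoothness indicators for the three 3-point substencils of WENO5, alongside a cheap single-pass indicator built from squared undivided differences. The paper's own simple smoothness indicator is cut off in the text above, so the cheap variant here is only a stand-in assumption, not the authors' formula.

```python
import numpy as np

def jiang_shu_betas(u):
    """Classical Jiang-Shu smoothness indicators for the three
    3-point substencils of WENO5, given 5 cell averages u[0..4]."""
    b0 = 13/12*(u[0] - 2*u[1] + u[2])**2 + 1/4*(u[0] - 4*u[1] + 3*u[2])**2
    b1 = 13/12*(u[1] - 2*u[2] + u[3])**2 + 1/4*(u[1] - u[3])**2
    b2 = 13/12*(u[2] - 2*u[3] + u[4])**2 + 1/4*(3*u[2] - 4*u[3] + u[4])**2
    return b0, b1, b2

def simple_indicator(u):
    """A cheap stand-in indicator: sum of squared undivided differences
    over the full stencil (one pass, no per-substencil polynomials).
    NOTE: illustrative assumption, not the paper's indicator."""
    d = np.diff(u)
    return float(np.sum(d**2))

smooth = np.array([1.0, 2.0, 3.0, 4.0, 5.0])    # linear (smooth) data
shock  = np.array([1.0, 1.0, 1.0, 10.0, 10.0])  # jump inside the stencil

# Both indicators are much larger on the discontinuous stencil,
# but the simple one needs far fewer operations per cell.
print(simple_indicator(smooth), simple_indicator(shock))
print(jiang_shu_betas(smooth), jiang_shu_betas(shock))
```

The saving grows in multiple space dimensions, where the Jiang–Shu construction must be evaluated per substencil and per direction, while a difference-based indicator reuses one pass over the cell averages.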
- Award ID(s): 1912735
- NSF-PAR ID: 10446327
- Publisher / Repository: Springer Science + Business Media
- Date Published:
- Journal Name: Journal of Scientific Computing
- Volume: 97
- Issue: 1
- ISSN: 0885-7474
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract: It has been recently established in David and Mayboroda (Approximation of Green functions and domains with uniformly rectifiable boundaries of all dimensions, arXiv:2010.09793) that on uniformly rectifiable sets the Green function is almost affine in the weak sense, and moreover, in some scenarios such Green function estimates are equivalent to the uniform rectifiability of a set. The present paper tackles a strong analogue of these results, starting with the “flagship” degenerate operators on sets with lower dimensional boundaries. We consider the elliptic operators $$L_{\beta ,\gamma } = -{\text {div}}\, D^{d+1+\gamma -n} \nabla $$ associated to a domain $$\Omega \subset {\mathbb {R}}^n$$ with a uniformly rectifiable boundary $$\Gamma $$ of dimension $$d < n-1$$, the now usual distance to the boundary $$D = D_\beta $$ given by $$D_\beta (X)^{-\beta } = \int _{\Gamma } |X-y|^{-d-\beta } \, d\sigma (y)$$ for $$X \in \Omega $$, where $$\beta >0$$ and $$\gamma \in (-1,1)$$. In this paper we show that the Green function G for $$L_{\beta ,\gamma }$$, with pole at infinity, is well approximated by multiples of $$D^{1-\gamma }$$, in the sense that the function $$\big | D\nabla \big (\ln \big ( \frac{G}{D^{1-\gamma }} \big )\big )\big |^2$$ satisfies a Carleson measure estimate on $$\Omega $$. We underline that the strong and the weak results are different in nature and, of course, at the level of the proofs: the latter extensively used compactness arguments, while the present paper relies on some intricate integration by parts and the properties of the “magical” distance function from David et al. (Duke Math J, to appear).
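As a quick numerical sanity check of the distance function $$D_\beta$$ (ours, not part of the paper's arguments): for the flat boundary $$\Gamma = \mathbb {R} \times \{0\} \subset \mathbb {R}^2$$ (so d = 1, n = 2), the integral defining $$D_\beta (X)^{-\beta }$$ at X = (0, t) scales like $$t^{-\beta }$$, so $$D_\beta $$ is an exact constant multiple of the Euclidean distance t to the boundary.

```python
import numpy as np

def D_beta(t, beta, y_max=1.0e4, n_pts=2_000_001):
    r"""Approximate D_beta at X = (0, t) above the flat boundary
    Gamma = R x {0} in R^2 (d = 1), via a Riemann sum for
    D_beta(X)^{-beta} = \int_Gamma |X - y|^{-d-beta} dsigma(y)."""
    y = np.linspace(-y_max, y_max, n_pts)
    dy = y[1] - y[0]
    integrand = (y**2 + t**2) ** (-(1.0 + beta) / 2.0)
    return (np.sum(integrand) * dy) ** (-1.0 / beta)

# For a flat boundary, D_beta should be linear in the distance t,
# so doubling t should double D_beta:
ratio = D_beta(2.0, 1.0) / D_beta(1.0, 1.0)
print(ratio)  # ~2
```

On a general uniformly rectifiable boundary this exact proportionality fails, which is precisely why $$D_\beta $$ (rather than the raw distance) is the useful object in the paper.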
Abstract: The free multiplicative Brownian motion $$b_{t}$$ is the large-N limit of the Brownian motion on $$\mathsf {GL}(N;\mathbb {C})$$, in the sense of $$*$$-distributions. The natural candidate for the large-N limit of the empirical distribution of eigenvalues is thus the Brown measure of $$b_{t}$$. In previous work, the second and third authors showed that this Brown measure is supported in the closure of a region $$\Sigma _{t}$$ that appeared in the work of Biane. In the present paper, we compute the Brown measure completely. It has a continuous density $$W_{t}$$ on $$\overline{\Sigma }_{t}$$, which is strictly positive and real analytic on $$\Sigma _{t}$$. This density has a simple form in polar coordinates: $$W_{t}(r,\theta )=\frac{1}{r^{2}}w_{t}(\theta )$$, where $$w_{t}$$ is an analytic function determined by the geometry of the region $$\Sigma _{t}$$. We show also that the spectral measure of free unitary Brownian motion $$u_{t}$$ is a “shadow” of the Brown measure of $$b_{t}$$, precisely mirroring the relationship between the circular and semicircular laws. We develop several new methods, based on stochastic differential equations and PDE, to prove these results.
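The polar-coordinate form makes radial integrals of the Brown measure elementary. As an illustration (our computation, under the assumption that the section of the region in direction $$\theta $$ is a single interval $$r_{\min }(\theta )< r < r_{\max }(\theta )$$), writing the measure as $$W_{t}(r,\theta )\, r \, dr \, d\theta $$ gives

$$\int _{r_{\min }(\theta )}^{r_{\max }(\theta )} W_{t}(r,\theta )\, r \, dr = \int _{r_{\min }(\theta )}^{r_{\max }(\theta )} \frac{w_{t}(\theta )}{r} \, dr = w_{t}(\theta )\,\big ( \ln r_{\max }(\theta ) - \ln r_{\min }(\theta ) \big ),$$

so the angular distribution of mass is governed entirely by the single analytic function $$w_{t}$$.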
Abstract: For a smooth projective variety X over an algebraic number field k, a conjecture of Bloch and Beilinson predicts that the kernel of the Albanese map of X is a torsion group. In this article we consider a product $$X=C_1\times \cdots \times C_d$$ of smooth projective curves and show that if the conjecture is true for any subproduct of two curves, then it is true for X. For a product of two curves $$X=C_1\times C_2$$ over $$\mathbb {Q} $$ with positive genus we construct many nontrivial examples that satisfy the weaker property that the image of the natural map $$J_1(\mathbb {Q})\otimes J_2(\mathbb {Q})\xrightarrow {\varepsilon }{{\,\textrm{CH}\,}}_0(C_1\times C_2)$$ is finite, where $$J_i$$ is the Jacobian variety of $$C_i$$. Our constructions include many new examples of non-isogenous pairs of elliptic curves $$E_1, E_2$$ with positive rank, including the first known examples of rank greater than 1. Combining these constructions with our previous result, we obtain infinitely many nontrivial products $$X=C_1\times \cdots \times C_d$$ for which the analogous map $$\varepsilon $$ has finite image.
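For readers unfamiliar with the map $$\varepsilon $$, one standard way to write such an external product map (our notation; the base points $$p_i \in C_i(\mathbb {Q})$$ are an assumption of this presentation, not taken from the abstract) is on point classes:

$$\varepsilon \big ( (a - p_1) \otimes (b - p_2) \big ) = (a,b) - (a,p_2) - (p_1,b) + (p_1,p_2) \in {{\,\textrm{CH}\,}}_0(C_1\times C_2),$$

extended bilinearly. Finiteness of the image of $$\varepsilon $$ is then a strong constraint on the group of 0-cycles on the surface, consistent with the torsion prediction of Bloch and Beilinson.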
Abstract: We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $$p_{\text {noisy}}$$ and the corresponding noiseless output distribution $$p_{\text {ideal}}$$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark F that measures this correlation behaves as $$F=\text {exp}(-2s\epsilon \pm O(s\epsilon ^2))$$, where $$\epsilon $$ is the probability of error per circuit location and s is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $$p_{\text {noisy}}$$ and the uniform distribution $$p_{\text {unif}}$$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $$p_{\text {noisy}}\approx Fp_{\text {ideal}}+ (1-F)p_{\text {unif}}$$. In other words, although at least one local error occurs with probability $$1-F$$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $$O(F\epsilon \sqrt{s})$$. Thus, the “white-noise approximation” is meaningful when $$\epsilon \sqrt{s} \ll 1$$, a quadratically weaker condition than the $$\epsilon s\ll 1$$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $$s \ge \Omega (n\log (n))$$, which corresponds to only logarithmic depth circuits, and if, additionally, the inverse error rate satisfies $$\epsilon ^{-1} \ge {\tilde{\Omega }}(n)$$, which is needed to ensure errors are scrambled faster than F decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
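The white-noise decomposition has a directly checkable consequence for the linear cross-entropy benchmark: substituting the mixture into the XEB estimator shows that the estimated fidelity equals exactly F times the ideal distribution's self-XEB. A small numerical sketch (the "ideal" distribution below is an arbitrary stand-in, not the output of a real circuit):

```python
import math
import random

random.seed(0)
n = 10                      # qubits
N = 2 ** n                  # measurement outcomes
s = 450                     # two-qubit gates
eps = 1e-3                  # error probability per circuit location
F = math.exp(-2 * s * eps)  # fidelity, per the abstract's formula

# An arbitrary normalized "ideal" output distribution.
w = [random.expovariate(1.0) for _ in range(N)]
Z = sum(w)
p_ideal = [x / Z for x in w]
p_unif = [1.0 / N] * N
# The white-noise mixture from the abstract:
p_noisy = [F * pi + (1 - F) * pu for pi, pu in zip(p_ideal, p_unif)]

def xeb(p_sample, p_ref):
    """Linear cross-entropy benchmark: N * E_{x ~ p_sample}[p_ref(x)] - 1."""
    return N * sum(ps * pr for ps, pr in zip(p_sample, p_ref)) - 1.0

# The uniform part contributes zero to the XEB, so:
# xeb(p_noisy, p_ideal) = F * xeb(p_ideal, p_ideal), exactly.
print(xeb(p_noisy, p_ideal), F * xeb(p_ideal, p_ideal))
```

This is why XEB experiments can still extract F deep in the low-fidelity regime: the (1 − F) uniform component averages out of the estimator rather than biasing it.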
Abstract: Leptoquarks ($$\textrm{LQ}$$s) are hypothetical particles that appear in various extensions of the Standard Model (SM) and can explain observed differences between SM theory predictions and experimental results. The production of these particles has been widely studied at various experiments, most recently at the Large Hadron Collider (LHC), and stringent bounds have been placed on their masses and couplings, assuming the simplest beyond-SM (BSM) hypotheses. However, the limits are significantly weaker for $$\textrm{LQ}$$ models with family non-universal couplings containing enhanced couplings to third-generation fermions. We present a new study on the production of a $$\textrm{LQ}$$ at the LHC, with preferential couplings to third-generation fermions, considering proton-proton collisions at $$\sqrt{s} = 13 \, \textrm{TeV}$$ and $$\sqrt{s} = 13.6 \, \textrm{TeV}$$. Such a hypothesis is well motivated theoretically and it can explain the recent anomalies in the precision measurements of $$\textrm{B}$$-meson decay rates, specifically the $$R_{D^{(*)}}$$ ratios. Under a simplified model where the $$\textrm{LQ}$$ masses and couplings are free parameters, we focus on cases where the $$\textrm{LQ}$$ decays to a $$\tau $$ lepton and a $$\textrm{b}$$ quark, and study how the results are affected by different assumptions about chiral currents and interference effects with other BSM processes with the same final states, such as diagrams with a heavy vector boson, $$\textrm{Z}^{\prime }$$. The analysis is performed using machine learning techniques, resulting in an increased discovery reach at the LHC, allowing us to probe new physics phase space which addresses the $$\textrm{B}$$-meson anomalies, for $$\textrm{LQ}$$ masses up to $$5.00\, \textrm{TeV}$$, for the high luminosity LHC scenario.