

Title: Spectral Stability of the $\overline{\partial}$-Neumann Laplacian: Domain Perturbations
Abstract

We study spectral stability of the $\bar{\partial}$-Neumann Laplacian on a bounded domain in $\mathbb{C}^n$ when the underlying domain is perturbed. In particular, we establish upper semi-continuity properties for the variational eigenvalues of the $\bar{\partial}$-Neumann Laplacian on bounded pseudoconvex domains in $\mathbb{C}^n$, lower semi-continuity properties on pseudoconvex domains that satisfy property (P), and quantitative estimates on smooth bounded pseudoconvex domains of finite D’Angelo type in $\mathbb{C}^n$.
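
For context, the variational eigenvalues referred to here can be described, in one standard formulation (stated as background, not quoted from the paper), as the Courant–Fischer min–max values of the quadratic form of the $\bar{\partial}$-Neumann Laplacian on $(0,q)$-forms:

$$\lambda_j \;=\; \inf_{\substack{F \subset \mathrm{dom}(\bar{\partial}) \cap \mathrm{dom}(\bar{\partial}^*) \\ \dim F = j}} \; \sup_{u \in F,\, u \neq 0} \frac{\|\bar{\partial} u\|^2 + \|\bar{\partial}^* u\|^2}{\|u\|^2},$$

so the semi-continuity results control how each $\lambda_j$ behaves as the domain is perturbed.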

 
Award ID(s): 2055538
NSF-PAR ID: 10361689
Publisher / Repository: Springer Science + Business Media
Journal Name: The Journal of Geometric Analysis
Volume: 32
Issue: 2
ISSN: 1050-6926
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Finite volume, weighted essentially non-oscillatory (WENO) schemes require the computation of a smoothness indicator. This can be expensive, especially in multiple space dimensions. We consider the use of the simple smoothness indicator $\sigma^{\mathrm{S}} = \frac{1}{N_{\mathrm{S}}-1}\sum_{j} (\bar{u}_{j} - \bar{u}_{m})^2$, where $N_{\mathrm{S}}$ is the number of mesh elements in the stencil, $\bar{u}_{j}$ is the local function average over mesh element $j$, and index $m$ gives the target element. Reconstructions utilizing standard WENO weighting fail with this smoothness indicator. We develop a modification of WENO-Z weighting that gives a reliable and accurate reconstruction of adaptive order, which we denote as SWENOZ-AO. We prove that it attains the order of accuracy of the large stencil polynomial approximation when the solution is smooth, and drops to the order of the small stencil polynomial approximations when there is a jump discontinuity in the solution. Numerical examples in one and two space dimensions on general meshes verify the approximation properties of the reconstruction. They also show it to be about 10 times faster in two space dimensions than reconstructions using the classic smoothness indicator. The new reconstruction is applied to define finite volume schemes to approximate the solution of hyperbolic conservation laws. Numerical tests show results of the same quality as standard WENO schemes using the classic smoothness indicator, but with an overall speedup in the computation time of about 3.5–5 times in 2D tests. Moreover, the computational efficiency (CPU time versus error) is noticeably improved.
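
    As a rough illustration of why this indicator is cheap (a sketch under assumed conventions, not the authors' code; the function name and array layout are ours), it is a single pass over the stencil's cell averages:

    ```python
    import numpy as np

    def simple_smoothness_indicator(u_bar, m):
        """Evaluate sigma^S = (1/(N_S - 1)) * sum_j (u_bar[j] - u_bar[m])^2.

        u_bar : cell averages over the N_S mesh elements of the stencil
        m     : index of the target element within the stencil
        """
        u = np.asarray(u_bar, dtype=float)
        return np.sum((u - u[m]) ** 2) / (len(u) - 1)
    ```

    The $j = m$ term contributes zero, so summing over the whole stencil matches the formula above; note this is a single accumulation, in contrast to the per-substencil derivative sums required by the classic indicator.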

     
  2. Abstract

    The free multiplicative Brownian motion $b_t$ is the large-$N$ limit of the Brownian motion on $\mathsf{GL}(N;\mathbb{C})$, in the sense of $*$-distributions. The natural candidate for the large-$N$ limit of the empirical distribution of eigenvalues is thus the Brown measure of $b_t$. In previous work, the second and third authors showed that this Brown measure is supported in the closure of a region $\Sigma_t$ that appeared in the work of Biane. In the present paper, we compute the Brown measure completely. It has a continuous density $W_t$ on $\overline{\Sigma}_t$, which is strictly positive and real analytic on $\Sigma_t$. This density has a simple form in polar coordinates: $W_t(r,\theta) = \frac{1}{r^2} w_t(\theta)$, where $w_t$ is an analytic function determined by the geometry of the region $\Sigma_t$. We show also that the spectral measure of free unitary Brownian motion $u_t$ is a “shadow” of the Brown measure of $b_t$, precisely mirroring the relationship between the circular and semicircular laws. We develop several new methods, based on stochastic differential equations and PDE, to prove these results.

     
  3. Abstract

    It has been recently established in David and Mayboroda (Approximation of Green functions and domains with uniformly rectifiable boundaries of all dimensions, arXiv:2010.09793) that on uniformly rectifiable sets the Green function is almost affine in the weak sense, and moreover, in some scenarios such Green function estimates are equivalent to the uniform rectifiability of a set. The present paper tackles a strong analogue of these results, starting with the “flagship” degenerate operators on sets with lower-dimensional boundaries. We consider the elliptic operators $L_{\beta,\gamma} = -\operatorname{div} D^{d+1+\gamma-n} \nabla$ associated to a domain $\Omega \subset \mathbb{R}^n$ with a uniformly rectifiable boundary $\Gamma$ of dimension $d < n-1$, the now usual distance to the boundary $D = D_\beta$ given by $D_\beta(X)^{-\beta} = \int_{\Gamma} |X-y|^{-d-\beta} \, d\sigma(y)$ for $X \in \Omega$, where $\beta > 0$ and $\gamma \in (-1,1)$. In this paper we show that the Green function $G$ for $L_{\beta,\gamma}$, with pole at infinity, is well approximated by multiples of $D^{1-\gamma}$, in the sense that the function $\big| D \nabla \big( \ln \big( \tfrac{G}{D^{1-\gamma}} \big) \big) \big|^2$ satisfies a Carleson measure estimate on $\Omega$. We underline that the strong and the weak results are different in nature and, of course, at the level of the proofs: the latter extensively used compactness arguments, while the present paper relies on some intricate integration by parts and the properties of the “magical” distance function from David et al. (Duke Math J, to appear).
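
    As a purely numerical sketch (hypothetical names throughout; assuming $\Gamma$ comes discretized as sample points with quadrature weights approximating the measure $\sigma$), the regularized distance $D_\beta$ can be evaluated by discretizing its defining integral:

    ```python
    import numpy as np

    def d_beta(X, gamma_pts, sigma_w, d, beta):
        # Quadrature approximation of
        #   D_beta(X)^(-beta) = integral over Gamma of |X - y|^(-d - beta) dsigma(y),
        # where gamma_pts[k] samples Gamma and sigma_w[k] is its weight.
        r = np.linalg.norm(gamma_pts - np.asarray(X), axis=1)  # |X - y_k|
        return float(np.sum(sigma_w * r ** (-d - beta)) ** (-1.0 / beta))
    ```

    The point of this smoothed-out quantity, as in earlier work in this series, is that it is comparable to the Euclidean distance to $\Gamma$ while being smooth inside $\Omega$, which is what allows it to enter the coefficient $D^{d+1+\gamma-n}$ of $L_{\beta,\gamma}$.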

     
  4. Abstract

    The Loewner framework is one of the most successful data-driven model order reduction techniques. If $N$ is the cardinality of a given data set, the so-called Loewner and shifted Loewner matrices $\mathbb{L} \in \mathbb{C}^{N\times N}$ and $\mathbb{S} \in \mathbb{C}^{N\times N}$ can be defined by solely relying on information encoded in the considered data set, and they play a crucial role in the computation of the sought rational model approximation. In particular, the singular value decomposition of a linear combination of $\mathbb{S}$ and $\mathbb{L}$ provides the tools needed to construct accurate models which fulfill important approximation properties with respect to the original data set. However, for highly-sampled data sets, the dense nature of $\mathbb{L}$ and $\mathbb{S}$ leads to numerical difficulties, namely the failure to allocate these matrices in certain memory-limited environments or excessive computational costs. Even though they do not possess any sparsity pattern, the Loewner and shifted Loewner matrices are extremely structured and, in this paper, we show how to fully exploit their Cauchy-like structure to reduce the cost of computing accurate rational models while avoiding the explicit allocation of $\mathbb{L}$ and $\mathbb{S}$. In particular, the use of the hierarchically semiseparable format allows us to remarkably lower both the computational cost and the memory requirements of the Loewner framework, obtaining a novel scheme whose costs scale with $N \log N$.
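
    For orientation, here is a minimal dense sketch of the standard construction (single-input single-output case; variable names are ours, and the paper's point is precisely to avoid allocating these matrices): the entries of $\mathbb{L}$ and $\mathbb{S}$ come directly from left data $(\mu_i, v_i)$ and right data $(\lambda_j, w_j)$ with disjoint sample points.

    ```python
    import numpy as np

    def loewner_matrices(mu, v, lam, w):
        # Standard dense definitions (mu and lam must be disjoint):
        #   L[i, j] = (v[i] - w[j]) / (mu[i] - lam[j])               Loewner
        #   S[i, j] = (mu[i]*v[i] - lam[j]*w[j]) / (mu[i] - lam[j])  shifted
        denom = mu[:, None] - lam[None, :]
        L = (v[:, None] - w[None, :]) / denom
        S = (mu[:, None] * v[:, None] - lam[None, :] * w[None, :]) / denom
        return L, S
    ```

    A reduced-order rational model is then extracted from the truncated SVD of a linear combination such as $\mathbb{S} - s_0\,\mathbb{L}$ (e.g. `np.linalg.svd(S - s0 * L)`); the Cauchy-like structure visible in `denom` is what the hierarchically semiseparable format exploits to bring the cost down to $N \log N$.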

     
  5. Abstract

    In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:

    Certifying that a list of $n$ integers has no 3-SUM solution can be done in Merlin–Arthur time $\tilde{O}(n)$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $\tilde{O}(n^{1.5})$ time (that is, there is a proof system with proofs of length $\tilde{O}(n^{1.5})$ and a deterministic verifier running in $\tilde{O}(n^{1.5})$ time).

    Counting the number of $k$-cliques with total edge weight equal to zero in an $n$-node graph can be done in Merlin–Arthur time $\tilde{O}(n^{\lceil k/2 \rceil})$ (where $k \ge 3$). For odd $k$, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an $m$-edge graph can be done in Merlin–Arthur time $\tilde{O}(m)$. Previous Merlin–Arthur protocols by Williams [CCC’16] and Björklund and Kaski [PODC’16] could only count $k$-cliques in unweighted graphs, and had worse running times for small $k$.

    Computing the All-Pairs Shortest Distances matrix for an $n$-node graph can be done in Merlin–Arthur time $\tilde{O}(n^2)$. Note this is optimal, as the matrix can have $\Omega(n^2)$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $\tilde{O}(n^{2.94})$ nondeterministic time algorithm.

    Certifying that an $n$-variable $k$-CNF is unsatisfiable can be done in Merlin–Arthur time $2^{n/2 - n/O(k)}$. We also observe an algebrization barrier for the previous $2^{n/2} \cdot \mathrm{poly}(n)$-time Merlin–Arthur protocol of R. Williams [CCC’16] for #SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for $k$-UNSAT running in $2^{n/2}/n^{\omega(1)}$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.

    Certifying that a Quantified Boolean Formula is true can be done in Merlin–Arthur time $2^{4n/5} \cdot \mathrm{poly}(n)$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in $2^{2n/3} \cdot \mathrm{poly}(n)$ time.

    Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to $n$ integers can be done in Merlin–Arthur time $2^{n/3} \cdot \mathrm{poly}(n)$, improving on the previous best protocol by Nederlof [IPL 2017], which took $2^{0.49991n} \cdot \mathrm{poly}(n)$ time.

     