Title: Optimal error estimates of the semidiscrete discontinuous Galerkin methods for two dimensional hyperbolic equations on Cartesian meshes using P^k elements
In this paper, we study the optimal error estimates of the classical discontinuous Galerkin method for time-dependent two-dimensional hyperbolic equations using P^k elements on uniform Cartesian meshes, and prove that the error in the L^2 norm achieves optimal (k + 1)th order convergence when upwind fluxes are used. For the linear constant-coefficient case, the results hold for arbitrary piecewise polynomials of degree k ≥ 0. For the variable-coefficient and nonlinear cases, we give the proof for piecewise polynomials of degree k = 0, 1, 2, 3 and k = 2, 3, respectively, under the condition that the wind direction does not change. The theoretical results are verified by numerical examples.
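The (k + 1)th-order claim can be spot-checked numerically in the simplest setting: for k = 0, the semidiscrete DG scheme with upwind flux for the 1-D constant-coefficient advection equation u_t + u_x = 0 reduces to the classical first-order upwind finite volume scheme. The sketch below is an illustration only, not the paper's 2-D setup; the forward Euler time stepping and the CFL number are my own choices:

```python
import numpy as np

def upwind_p0_dg(n_cells, t_final=0.5, cfl=0.5):
    """P0 DG with upwind flux for u_t + u_x = 0 on [0, 1], periodic
    boundary, forward Euler in time; returns the discrete L2 error."""
    h = 1.0 / n_cells
    x = (np.arange(n_cells) + 0.5) * h      # cell centers
    u = np.sin(2 * np.pi * x)               # smooth initial data
    dt = cfl * h
    t = 0.0
    while t < t_final - 1e-12:
        dt_step = min(dt, t_final - t)
        # Upwind flux: the wind blows left-to-right, so each interface
        # flux takes the value from the cell on its left.
        u = u - dt_step / h * (u - np.roll(u, 1))
        t += dt_step
    exact = np.sin(2 * np.pi * (x - t_final))
    return np.sqrt(h * np.sum((u - exact) ** 2))

errs = [upwind_p0_dg(n) for n in (40, 80, 160)]
rates = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
```

Halving h should roughly halve the error (rate ≈ 1, i.e., k + 1 for k = 0). Observing the (k + 1)th-order rate for higher k would require a genuine DG polynomial basis and a matching high-order time integrator.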
Award ID(s): 1719410
PAR ID: 10168287
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: ESAIM: Mathematical Modelling and Numerical Analysis
Volume: 54
Issue: 2
ISSN: 0764-583X
Page Range / eLocation ID: 705 to 726
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. In this paper, we study the central discontinuous Galerkin (DG) method on overlapping meshes for second-order wave equations. We consider the first-order hyperbolic system, which is equivalent to the second-order scalar equation, and construct the corresponding central DG scheme. We then provide the stability analysis and the optimal error estimates for the proposed central DG scheme for one- and multi-dimensional cases with piecewise P^k elements. The optimal error estimates are valid for uniform Cartesian meshes and polynomials of arbitrary degree k ≥ 0. In particular, we adopt the techniques in Liu et al. (SIAM J. Numer. Anal. 56 (2018) 520–541; ESAIM: M2AN 54 (2020) 705–726) and obtain the local projection that is crucial in deriving the optimal order of convergence. The construction of the projection here is more challenging since the unknowns are highly coupled in the proposed scheme. Dispersion analysis is performed on the proposed scheme for one-dimensional problems, indicating that the numerical solution with P^1 elements reaches its minimum with a suitable parameter in the dissipation term. Several numerical examples including accuracy tests and long-time simulation are presented to validate the theoretical results.
  2. This paper constructs and analyzes a boundary correction finite element method for the Stokes problem based on the Scott–Vogelius pair on Clough–Tocher splits. The velocity space consists of continuous piecewise polynomials of degree k, and the pressure space consists of piecewise polynomials of degree (k − 1) without continuity constraints. A Lagrange multiplier space that consists of continuous piecewise polynomials with respect to the boundary partition is introduced to enforce boundary conditions and to mitigate the lack of pressure-robustness. We prove several inf-sup conditions, leading to the well-posedness of the method. In addition, we show that the method converges with optimal order and that the velocity approximation is divergence-free.
  3. We prove two new results about the inability of low-degree polynomials to uniformly approximate constant-depth circuits, even to slightly-better-than-trivial error. First, we prove a tight Omega~(n^{1/2}) lower bound on the threshold degree of the SURJECTIVITY function on n variables. This matches the best known threshold degree bound for any AC^0 function, previously exhibited by a much more complicated circuit of larger depth (Sherstov, FOCS 2015). Our result also extends to a 2^{Omega~(n^{1/2})} lower bound on the sign-rank of an AC^0 function, improving on the previous best bound of 2^{Omega(n^{2/5})} (Bun and Thaler, ICALP 2016). Second, for any delta>0, we exhibit a function f : {-1,1}^n -> {-1,1} that is computed by a circuit of depth O(1/delta) and is hard to approximate by polynomials in the following sense: f cannot be uniformly approximated to error epsilon=1-2^{-Omega(n^{1-delta})}, even by polynomials of degree n^{1-delta}. Our recent prior work (Bun and Thaler, FOCS 2017) proved a similar lower bound, but which held only for error epsilon=1/3. Our result implies 2^{Omega(n^{1-delta})} lower bounds on the complexity of AC^0 under a variety of basic measures such as discrepancy, margin complexity, and threshold weight. This nearly matches the trivial upper bound of 2^{O(n)} that holds for every function. The previous best lower bound on AC^0 for these measures was 2^{Omega(n^{1/2})} (Sherstov, FOCS 2015). Additional applications in learning theory, communication complexity, and cryptography are described. 
  4. We present approximation and exact algorithms for piecewise regression of univariate and bivariate data using fixed-degree polynomials. Specifically, given a set S of n data points (x_1, y_1), . . . , (x_n, y_n) ∈ R^d × R where d ∈ {1, 2}, the goal is to segment the x_i's into some (arbitrary) number of disjoint pieces P_1, . . . , P_k, where each piece P_j is associated with a fixed-degree polynomial f_j : R^d → R, to minimize the total loss function λk + Σ_{i=1}^n (y_i − f(x_i))^2, where λ ≥ 0 is a regularization term that penalizes model complexity (number of pieces) and f : ∪_{j=1}^k P_j → R is the piecewise polynomial function defined by f|_{P_j} = f_j. The pieces P_1, . . . , P_k are disjoint intervals of R in the case of univariate data and disjoint axis-aligned rectangles in the case of bivariate data. Our error approximation allows use of any fixed-degree polynomial, not just linear functions. Our main results are the following. For univariate data, we present a (1 + ε)-approximation algorithm with time complexity O((n/ε) log(1/ε)), assuming the data is presented in sorted order of the x_i's. For bivariate data, we present three results: a sub-exponential exact algorithm with running time n^{O(√n)}; a polynomial-time constant-factor approximation algorithm; and a quasi-polynomial time approximation scheme (QPTAS). The bivariate case is believed in the folklore to be NP-hard, but we could not find a published record in the literature, so in this paper we also present a hardness proof for completeness.
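For intuition on the univariate objective, the segmented-least-squares dynamic program below computes the exact optimum of λk + Σ (y_i − f(x_i))² using O(n²) polynomial fits. This is a standard textbook DP sketch, not the paper's faster (1 + ε)-approximation algorithm; `piecewise_fit` and its interface are hypothetical names for illustration:

```python
import numpy as np

def piecewise_fit(xs, ys, lam, degree=1):
    """Exact DP for univariate piecewise polynomial regression:
    minimize lam * (#pieces) + total squared error, with xs sorted.
    (Illustrative sketch, not the paper's algorithm.)"""
    n = len(xs)
    # sse[i][j]: squared error of the best single degree-`degree`
    # polynomial fit to points i..j (inclusive, 0-indexed)
    sse = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            deg = min(degree, j - i)  # avoid underdetermined fits
            coeffs = np.polyfit(xs[i:j + 1], ys[i:j + 1], deg)
            resid = ys[i:j + 1] - np.polyval(coeffs, xs[i:j + 1])
            sse[i][j] = float(resid @ resid)
    best = [0.0] * (n + 1)  # best[j]: optimal cost of the first j points
    for j in range(1, n + 1):
        # last piece covers points i..j-1 for some split index i
        best[j] = min(best[i] + lam + sse[i][j - 1] for i in range(j))
    return best[n]
```

Larger λ trades fit quality for fewer pieces; two perfectly linear clusters with λ = 1 cost exactly 2 (two pieces, zero residual).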
  5. We prove that the most natural low-degree test for polynomials over finite fields is “robust” in the high-error regime for linear-sized fields. Specifically, we consider the “local” agreement of a function $$f:\mathbb{F}_{q}^{m}\rightarrow \mathbb{F}_{q}$$ with the space of degree-d polynomials, i.e., the expected agreement of the function with univariate degree-d polynomials over a randomly chosen line in $$\mathbb{F}_{q}^{m}$$, and prove that if this local agreement is $$\varepsilon\geq\Omega((d/q)^{\tau})$$ for some fixed $$\tau > 0$$, then there is a global degree-d polynomial $$Q:\mathbb{F}_{q}^{m}\rightarrow \mathbb{F}_{q}$$ with agreement nearly $$\varepsilon$$ with $$f$$. This settles a long-standing open question in the area of low-degree testing, yielding an $$O(d)$$-query robust test in the “high-error” regime (i.e., when $$\varepsilon < 1/2$$). The previous results in this space either required $$\varepsilon > 1/2$$ (Polishchuk & Spielman, STOC 1994), or $$q=\Omega(d^{4})$$ (Arora & Sudan, Combinatorica 2003), or needed to measure local distance on 2-dimensional “planes” rather than one-dimensional lines, leading to $$\Omega(d^{2})$$-query complexity (Raz & Safra, STOC 1997). Our analysis follows the spirit of most previous analyses in first analyzing the low-variable case ($$m=O(1)$$) and then “bootstrapping” to general multivariate settings. Our main technical novelty is a new analysis in the bivariate setting that exploits a previously known connection between multivariate factorization and finding (or testing) low-degree polynomials, in a non-“black-box” manner. This connection was used roughly in a black-box manner in the work of Arora & Sudan, and we show that opening up this black box and making some delicate choices in the analysis leads to our essentially optimal analysis.
A second contribution is a bootstrapping analysis which manages to lift analyses for $$m=2$$ directly to analyses for general $$m$$, where previous works needed to work with $$m=3$$ or $$m=4$$; arguably this bootstrapping is significantly simpler than those in prior works.
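The "local agreement over random lines" being tested can be made concrete: restrict f to a random line in F_q^m, interpolate a univariate degree-d polynomial from d + 1 samples, and check fresh points on the same line. The sketch below is my own illustration of this probe; the agreement in the paper maximizes over all degree-d univariate fits, which this single-interpolation check only lower-bounds. For a function that genuinely is a degree-d polynomial, every prediction matches:

```python
import random

def line_test_agreement(f, q, m, d, trials=200, rng=random.Random(0)):
    """Estimate how often f: F_q^m -> F_q agrees with the degree-d
    interpolant of its restriction to a random line (q must be prime)."""
    hits = total = 0
    for _ in range(trials):
        a = [rng.randrange(q) for _ in range(m)]   # base point of line
        b = [rng.randrange(q) for _ in range(m)]   # direction of line
        if all(v == 0 for v in b):
            continue  # degenerate direction, skip
        point = lambda t: tuple((a[i] + t * b[i]) % q for i in range(m))
        ts = list(range(d + 1))
        vals = [f(point(t)) for t in ts]           # interpolation samples
        for t in range(d + 1, d + 6):              # a few fresh points
            # Lagrange-evaluate the degree-d interpolant at t (mod q);
            # division is multiplication by a modular inverse.
            pred = 0
            for j, tj in enumerate(ts):
                num = den = 1
                for k, tk in enumerate(ts):
                    if k != j:
                        num = num * (t - tk) % q
                        den = den * (tj - tk) % q
                pred = (pred + vals[j] * num * pow(den, q - 2, q)) % q
            hits += int(pred == f(point(t)))
            total += 1
    return hits / total
```

A degree-3 polynomial over F_101 scores agreement 1.0, while a random function scores about 1/q, which is the "trivial error" baseline the robustness results are measured against.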