Discontinuous Galerkin Galerkin Differences for the Wave Equation in Second-Order Form
- Award ID(s): 2012296
- PAR ID: 10232745
- Date Published:
- Journal Name: SIAM Journal on Scientific Computing
- Volume: 43
- Issue: 2
- ISSN: 1064-8275
- Page Range / eLocation ID: A1497 to A1526
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like This
-
The discontinuous Petrov–Galerkin (DPG) method is a Petrov–Galerkin finite element method with test functions designed for obtaining stability. These test functions are computable locally, element by element, and are motivated by optimal test functions which attain the supremum in an inf-sup condition. A profound consequence of the use of nearly optimal test functions is that the DPG method can inherit the stability of the (undiscretized) variational formulation, be it coercive or not. This paper combines a presentation of the fundamentals of the DPG ideas with a review of ongoing research on the theory and applications of the DPG methodology. The scope of the presented theory is restricted to linear problems on Hilbert spaces, but pointers to extensions are provided. Multiple viewpoints on the basic theory are provided. They show that the DPG method is equivalent to a method that minimizes a residual in a dual norm, as well as to a mixed method in which one solution component is an approximate error representation function. Being a residual minimization method, the DPG method yields Hermitian positive definite stiffness matrices even for non-self-adjoint boundary value problems. Having a built-in error representation, the method can be used out of the box in automatic adaptive algorithms. Unlike standard Galerkin methods, which are uninformed about test and trial norms, the DPG method must be equipped with a concrete test norm that enters the computations. Of particular interest are variational formulations in which the norm can be tailored to obtain robust stability. Key techniques for rigorously proving convergence of DPG schemes, including the construction of Fortin operators, which in the DPG case can be done element by element, are discussed in detail. Pointers to open frontiers are presented.
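As a concrete illustration of the residual minimization structure this abstract describes, here is a minimal NumPy sketch of the DPG normal equations on a hypothetical small linear system. The matrices `B` and `G` and the vector `ell` are stand-ins (random data, not an actual discretization): `B` encodes the bilinear form against an enriched test basis, `G` is the Gram matrix of the chosen test norm, and the Riesz representative of the residual doubles as the built-in error estimator.

```python
import numpy as np

# Algebraic sketch of DPG with illustrative sizes (not a real discretization).
# B  : n_test x n_trial matrix of the bilinear form b(u, v) in bases,
#      with an enriched test space (n_test > n_trial).
# G  : n_test x n_test SPD Gram matrix of the chosen test-norm inner product.
# ell: n_test load vector of the right-hand-side functional.
rng = np.random.default_rng(0)
n_trial, n_test = 4, 8
B = rng.standard_normal((n_test, n_trial))
M = rng.standard_normal((n_test, n_test))
G = M @ M.T + n_test * np.eye(n_test)        # SPD Gram matrix of the test norm
ell = rng.standard_normal(n_test)

# Optimal test functions: columns of T = G^{-1} B (done globally here for
# brevity; in practice this is a local, element-by-element computation).
T = np.linalg.solve(G, B)

# DPG stiffness K = B^T G^{-1} B is symmetric positive definite even when
# B comes from a non-self-adjoint problem.
K = B.T @ T
u = np.linalg.solve(K, B.T @ np.linalg.solve(G, ell))

# Built-in error representation: Riesz representative of the residual.
eps = np.linalg.solve(G, ell - B @ u)
error_estimate = np.sqrt(eps @ (G @ eps))    # residual measured in the dual norm
print("a posteriori error estimate:", error_estimate)
```

In an actual DPG code the test space is broken, so `G` is block diagonal and the solves for `T` and `eps` decouple element by element, which is what makes the optimal test functions cheap to compute.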
-
We develop a stochastic Galerkin method for a coupled Navier–Stokes-cloud system that models the dynamics of warm clouds. Our goal is to explicitly describe the evolution of uncertainties that arise from unknown input data, such as model parameters and initial or boundary conditions. The developed stochastic Galerkin method combines a space-time approximation obtained by a suitable finite volume method with a spectral-type approximation based on the generalized polynomial chaos expansion in the stochastic space. The resulting numerical scheme yields second-order accuracy in both space and time and exponential convergence in the stochastic space. Our numerical results demonstrate the reliability and robustness of the stochastic Galerkin method. We also use the proposed method to study the behavior of clouds in certain perturbed scenarios, for example, ones leading to changes in the macroscopic cloud pattern, such as a shift from hexagonal to rectangular structures.
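The generalized polynomial chaos (gPC) Galerkin projection at the heart of such a scheme can be seen on a toy problem far simpler than the Navier–Stokes-cloud system. The sketch below (illustrative names and sizes, not the paper's scheme) propagates uncertainty through a scalar decay ODE with an uncertain rate, projecting onto Legendre polynomials and time-stepping the resulting coupled deterministic system.

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

# Toy stochastic Galerkin (gPC) sketch for
#   u'(t) = -k(xi) * u(t),  u(0) = 1,  k(xi) = 1.0 + 0.3 * xi,
# with xi uniform on [-1, 1]; all names and sizes are illustrative.
P = 5                      # gPC order: Legendre modes P_0 .. P_P
x, w = leggauss(32)        # Gauss-Legendre nodes/weights on [-1, 1]
w = w / 2.0                # weights of the uniform density on [-1, 1]

def legendre(i, pts):
    c = np.zeros(i + 1); c[i] = 1.0
    return legval(pts, c)

phi = np.array([legendre(i, x) for i in range(P + 1)])   # (P+1, n_quad)
norms = phi**2 @ w                                       # <P_i, P_i>
k = 1.0 + 0.3 * x                                        # k(xi) at the nodes

# Galerkin projection: A[j, i] = <k P_i, P_j> / <P_j, P_j>, so the
# gPC coefficients satisfy the coupled system  u_j' = -sum_i A[j, i] u_i.
A = (phi * (k * w)) @ phi.T / norms[:, None]

# March the coupled deterministic system with classical RK4.
u = np.zeros(P + 1); u[0] = 1.0     # u(0, xi) = 1 -> only mode 0 is nonzero
dt, T = 0.01, 1.0
f = lambda v: -A @ v
for _ in range(int(T / dt)):
    k1 = f(u); k2 = f(u + 0.5*dt*k1); k3 = f(u + 0.5*dt*k2); k4 = f(u + dt*k3)
    u += dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0

mean_sg = u[0]                            # E[u(T, .)] is the zeroth coefficient
mean_ref = np.sum(w * np.exp(-k * T))     # quadrature reference solution
print(mean_sg, mean_ref)
```

The exponential convergence in the stochastic space mentioned in the abstract shows up here as rapid decay of the higher gPC coefficients `u[1:]` as `P` grows.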
-
In this paper, we apply the self-attention from the state-of-the-art Transformer in Attention Is All You Need for the first time to a data-driven operator learning problem related to partial differential equations. An effort is made to explain the heuristics of, and to improve the efficacy of, the attention mechanism. By employing operator approximation theory in Hilbert spaces, it is demonstrated for the first time that the softmax normalization in the scaled dot-product attention is sufficient but not necessary. Without softmax, the approximation capacity of a linearized Transformer variant can be proved to be comparable layer-wise to that of a Petrov–Galerkin projection, and the estimate is independent of the sequence length. A new layer normalization scheme mimicking the Petrov–Galerkin projection is proposed to allow a scaling to propagate through the attention layers, which helps the model achieve remarkable accuracy in operator learning tasks with unnormalized data. Finally, we present three operator learning experiments, including the viscid Burgers' equation, an interface Darcy flow, and an inverse interface coefficient identification problem. The newly proposed simple attention-based operator learner, the Galerkin Transformer, shows significant improvements in both training cost and evaluation accuracy over its softmax-normalized counterparts.
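A minimal NumPy sketch of the softmax-free attention described above, assuming the Q(K^T V)/n form with plain (unweighted) layer normalization applied to K and V; the paper's actual scheme uses learnable layer-norm weights to mimic a Petrov–Galerkin projection, so this is illustrative only.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Standard layer normalization over the feature (last) axis.
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def galerkin_attention(Q, K, V):
    """Softmax-free attention: Q @ (LN(K)^T @ LN(V)) / n.

    Forming K^T V first costs O(n d^2) rather than the O(n^2 d) of
    softmax attention, so the cost is linear in sequence length n.
    """
    n = Q.shape[0]
    return Q @ (layer_norm(K).T @ layer_norm(V)) / n

# Illustrative shapes: sequence length n, head dimension d.
rng = np.random.default_rng(0)
n, d = 1024, 64
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = galerkin_attention(Q, K, V)
print(out.shape)   # (n, d)
```

Because no row-wise softmax couples the n x n score matrix, the product can be reassociated as above, which is what makes the linear-in-n evaluation possible.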

