
Title: A posteriori error analysis for Schwarz overlapping domain decomposition methods
Domain decomposition methods are widely used for the numerical solution of partial differential equations on high performance computers. We develop an adjoint-based a posteriori error analysis for both multiplicative and additive overlapping Schwarz domain decomposition methods. The numerical error in a user-specified functional of the solution (quantity of interest) is decomposed into contributions that arise as a result of the finite iteration between the subdomains and from the spatial discretization. The spatial discretization contribution is further decomposed into contributions arising from each subdomain. This decomposition of the numerical error is used to construct a two-stage solution strategy that efficiently reduces the error in the quantity of interest by adjusting the relative contributions to the error.
Award ID(s):
1720473 1720402
Publication Date:
NSF-PAR ID:
10276736
Journal Name:
BIT Numerical Mathematics
ISSN:
0006-3835
Sponsoring Org:
National Science Foundation
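
As a concrete illustration of the setting described in the abstract, the sketch below implements a classical two-subdomain overlapping Schwarz iteration for a 1D Poisson problem and evaluates a simple quantity of interest at the end. It is a minimal sketch of the finite iteration whose truncation enters the error decomposition, not the paper's adjoint-based error estimator; the grid, overlap, and quantity of interest are illustrative assumptions.

```python
# Minimal sketch: multiplicative (alternating) overlapping Schwarz for
# -u'' = f on (0, 1) with u(0) = u(1) = 0, second-order finite differences.
# Grid size, overlap, and the quantity of interest are illustrative.
import numpy as np

n = 101                                 # global grid points
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones(n)                          # right-hand side f(x) = 1
u = np.zeros(n)                         # initial iterate

# Two overlapping subdomains: indices [0, m2] and [m1, n-1] with m1 < m2.
m1, m2 = 40, 60

def subdomain_solve(u, lo, hi, f):
    """Solve -u'' = f on x[lo..hi] with Dirichlet data taken from u."""
    m = hi - lo + 1
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    b = f[lo:hi + 1].copy()
    A[0, :] = 0.0;  A[0, 0] = 1.0;   b[0] = u[lo]     # boundary rows
    A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = u[hi]
    return np.linalg.solve(A, b)

for it in range(20):                    # finite Schwarz iteration
    u[0:m2 + 1] = subdomain_solve(u, 0, m2, f)        # subdomain 1
    u[m1:n] = subdomain_solve(u, m1, n - 1, f)        # subdomain 2

# Quantity of interest: point value u(1/2); exact solution is x(1 - x)/2.
print("QoI u(0.5) =", u[n // 2], " exact:", 0.125)
```

In the additive variant, both subdomain solves would use the same previous iterate (so they can run in parallel) and their corrections would be combined, whereas the multiplicative sweep above always uses the newest available data.
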
More Like This
  1. The paper introduces a new finite element numerical method for the solution of partial differential equations on evolving domains. The approach uses a completely Eulerian description of the domain motion. The physical domain is embedded in a triangulated computational domain and can overlap the time-independent background mesh in an arbitrary way. The numerical method is based on finite difference discretizations of time derivatives and a standard geometrically unfitted finite element method with an additional stabilization term in the spatial domain. The performance and analysis of the method rely on the fundamental extension result in Sobolev spaces for functions defined on bounded domains. This paper includes a complete stability and error analysis, which accounts for discretization errors resulting from finite difference and finite element approximations as well as for geometric errors coming from a possible approximate recovery of the physical domain. Several numerical examples illustrate the theory and demonstrate the practical efficiency of the method.
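
To make the Eulerian description concrete, here is a minimal sketch (not the paper's method) in which a moving physical domain, given by a level set on a fixed background grid, determines a time-dependent set of active cells; an unfitted method would assemble its discretization over exactly these cells. The geometry and resolution are illustrative.

```python
# Minimal sketch (not the paper's method): a moving physical domain is
# tracked implicitly on a fixed background grid; the active cells are
# recomputed each time step from a level-set description of the geometry.
import numpy as np

nx = ny = 64
xs = (np.arange(nx) + 0.5) / nx         # cell-center coordinates in [0, 1]
ys = (np.arange(ny) + 0.5) / ny
X, Y = np.meshgrid(xs, ys, indexing="ij")

def level_set(t):
    """Signed distance to a disk of radius 0.2 whose center moves right."""
    cx, cy = 0.3 + 0.4 * t, 0.5
    return np.sqrt((X - cx) ** 2 + (Y - cy) ** 2) - 0.2

for t in np.linspace(0.0, 1.0, 5):
    phi = level_set(t)
    active = phi < 0.0                  # cells inside the physical domain
    # An unfitted method assembles only over active/cut cells and adds
    # stabilization near the immersed boundary; here we just count them.
    print(f"t = {t:.2f}: {active.sum()} active background cells")
```
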
  2. In the past few decades, there has been rapid growth in quantity and variety of healthcare data. These large sets of data are usually high dimensional (e.g. patients, their diagnoses, and medications to treat their diagnoses) and cannot be adequately represented as matrices. Thus, many existing algorithms cannot analyze them. To accommodate these high dimensional data, tensor factorization, which can be viewed as a higher-order extension of methods like PCA, has attracted much attention and emerged as a promising solution. However, tensor factorization is a computationally expensive task, and existing methods developed to factor large tensors are not flexible enough for real-world situations. To address this scaling problem more efficiently, we introduce SGranite, a distributed, scalable, and sparse tensor factorization method fit through stochastic gradient descent. SGranite offers three contributions: (1) Scalability: it employs a block partitioning and parallel processing design and thus scales to large tensors, (2) Accuracy: we show that our method can achieve results faster without sacrificing the quality of the tensor decomposition, and (3) Flexible constraints: we show our approach can encompass various kinds of constraints including l2 norm, l1 norm, and logistic regularization. We demonstrate SGranite's capabilities in two real-world use cases. In the first, we use Google searches for flu-like symptoms to characterize and predict influenza patterns. In the second, we use SGranite to extract clinically interesting sets (i.e., phenotypes) of patients from electronic health records. Through these case studies, we show SGranite has the potential to be used to rapidly characterize, predict, and manage large multimodal datasets, thereby promising a novel, data-driven solution that can benefit very large segments of the population.
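
The core computational kernel behind methods of this kind can be illustrated with a minimal sketch: rank-R CP factorization of a sparse three-way tensor fitted by stochastic gradient descent with an l2 penalty. SGranite's block partitioning, parallel execution, and other constraint types are omitted, and all sizes and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: rank-R CP factorization of a sparse 3-way tensor fitted
# by stochastic gradient descent with an l2 penalty. Block partitioning,
# parallelism, and other constraints are omitted; sizes are illustrative.
import random
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 50, 40, 30, 5

# Synthetic sparse tensor: a list of observed entries (i, j, k, value).
A0, B0, C0 = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))
obs = [(i, j, k, float(A0[i] @ (B0[j] * C0[k])))
       for i, j, k in zip(rng.integers(0, I, 2000),
                          rng.integers(0, J, 2000),
                          rng.integers(0, K, 2000))]

A = 0.1 * rng.random((I, R))            # factor matrices to be learned
B = 0.1 * rng.random((J, R))
C = 0.1 * rng.random((K, R))
lr, lam = 0.05, 1e-4                    # step size and l2 regularization

for epoch in range(30):
    random.shuffle(obs)
    for i, j, k, v in obs:
        r = A[i] @ (B[j] * C[k]) - v    # residual at this entry
        gA = r * (B[j] * C[k]) + lam * A[i]
        gB = r * (A[i] * C[k]) + lam * B[j]
        gC = r * (A[i] * B[j]) + lam * C[k]
        A[i] -= lr * gA; B[j] -= lr * gB; C[k] -= lr * gC

rmse = np.sqrt(np.mean([(A[i] @ (B[j] * C[k]) - v) ** 2
                        for i, j, k, v in obs]))
print("training RMSE:", rmse)
```
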
  3. In this paper, we consider Maxwell’s equations in linear dispersive media described by a single-pole Lorentz model for electronic polarization. We study two classes of commonly used spatial discretizations: finite difference methods (FD) with arbitrary even order accuracy in space and high spatial order discontinuous Galerkin (DG) finite element methods. Both types of spatial discretizations are coupled with second order semi-implicit leap-frog and implicit trapezoidal temporal schemes. By performing detailed dispersion analysis for the semi-discrete and fully discrete schemes, we obtain rigorous quantification of the dispersion error for Lorentz dispersive dielectrics. In particular, comparisons of dispersion error can be made taking into account the model parameters and mesh sizes in the design of the two types of schemes. This work is a continuation of our previous research on energy-stable numerical schemes for nonlinear dispersive optical media [6,7]. The results for the numerical dispersion analysis of the reduced linear model, considered in the present paper, can guide us in the optimal choice of discretization parameters for the more complicated and nonlinear models. The numerical dispersion analysis of the fully discrete FD and DG schemes, for the dispersive Maxwell model considered in this paper, clearly indicates the dependence of the numerical dispersion errors on spatial and temporal discretizations, their order of accuracy, mesh discretization parameters and model parameters. The results obtained here cannot be arrived at by considering discretizations of Maxwell’s equations in free space. In particular, our results contrast the advantages and disadvantages of using high order FD or DG schemes and leap-frog or trapezoidal time integrators over different frequency ranges using a variety of measures.
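
As a small worked example of the quantities involved, the sketch below evaluates the exact dispersion relation of a single-pole Lorentz medium, i.e. the reference curve against which the numerical dispersion of FD and DG schemes is measured. The parameter values and the e^{-i omega t} sign convention are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: exact dispersion relation of a single-pole Lorentz
# medium (e^{-i omega t} convention), the reference against which the
# numerical dispersion of FD/DG schemes is measured. Values illustrative.
import numpy as np

eps_inf, eps_s = 1.0, 2.25   # high-frequency and static relative permittivity
omega1 = 2 * np.pi * 1e9     # resonance frequency (rad/s)
delta = 1e8                  # damping constant (1/s)
c = 2.99792458e8             # speed of light in vacuum (m/s)

def eps_lorentz(omega):
    """Complex relative permittivity of the single-pole Lorentz model."""
    return eps_inf + (eps_s - eps_inf) * omega1**2 / (
        omega1**2 - 2j * delta * omega - omega**2)

def wavenumber(omega):
    """Exact complex wave number k(omega) = (omega / c) sqrt(eps(omega))."""
    return omega / c * np.sqrt(eps_lorentz(omega))

for freq in [0.5e9, 1.5e9, 3.0e9]:      # sample frequencies in Hz
    k = wavenumber(2 * np.pi * freq)
    print(f"f = {freq:.1e} Hz: Re k = {k.real:8.3f} 1/m, Im k = {k.imag:.3e} 1/m")
```
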
  4. Abstract

    We formulate the two-dimensional gravity-capillary water wave equations in a spatially quasi-periodic setting and present a numerical study of solutions of the initial value problem. We propose a Fourier pseudo-spectral discretization of the equations of motion in which one-dimensional quasi-periodic functions are represented by two-dimensional periodic functions on a torus. We adopt a conformal mapping formulation and employ a quasi-periodic version of the Hilbert transform to determine the normal velocity of the free surface. Two methods of time-stepping the initial value problem are proposed: an explicit Runge–Kutta (ERK) method and an exponential time-differencing (ETD) scheme. The ETD approach makes use of the small-scale decomposition to eliminate stiffness due to surface tension. We perform a convergence study to compare the accuracy and efficiency of the methods on a traveling wave test problem. We also present an example of a periodic wave profile containing vertical tangent lines that is set in motion with a quasi-periodic velocity potential. As time evolves, each wave peak evolves differently, and only some of them overturn. Beyond water waves, we argue that spatial quasi-periodicity is a natural setting to study the dynamics of linear and nonlinear waves, offering a third option to the usual modeling assumption that solutions either evolve on a periodic domain or decay at infinity.

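The torus representation underlying this approach can be illustrated with a minimal sketch: a one-dimensional quasi-periodic function f(x) = F(x, alpha*x) is the trace of a two-dimensional periodic function F along a line of irrational slope, so its values anywhere can be recovered from a 2D Fourier series of F. The particular F and alpha below are illustrative choices, not from the paper.

```python
# Minimal sketch of the torus representation: a quasi-periodic f(x) =
# F(x, alpha * x) is recovered anywhere from the 2D Fourier series of a
# periodic F on the torus. F and alpha here are illustrative choices.
import numpy as np

alpha = np.sqrt(2.0)                    # irrational frequency ratio

def F(t1, t2):
    """A smooth 2*pi-periodic function of both torus variables."""
    return np.cos(t1) + 0.5 * np.sin(t2) + 0.25 * np.cos(t1 + 2 * t2)

N = 32                                  # modes per torus direction
t = 2 * np.pi * np.arange(N) / N
T1, T2 = np.meshgrid(t, t, indexing="ij")
c = np.fft.fft2(F(T1, T2)) / N**2       # 2D Fourier coefficients c[j, k]
j = np.fft.fftfreq(N, d=1.0 / N)        # integer mode numbers

# Evaluate f(x) = sum_{j,k} c[j,k] exp(i (j + alpha k) x) at arbitrary x.
x = np.linspace(0.0, 20.0, 5)
f = np.array([np.sum(c * np.exp(1j * (j[:, None] + alpha * j[None, :]) * xi))
              for xi in x]).real
print("max error vs. direct trace:", np.max(np.abs(f - F(x, alpha * x))))
```
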
  5. Abstract

    In the last decade, much work in atmospheric science has focused on spatial verification (SV) methods for gridded prediction, which overcome serious disadvantages of pixelwise verification. However, neural networks (NN) in atmospheric science are almost always trained to optimize pixelwise loss functions, even when ultimately assessed with SV methods. This establishes a disconnect between model verification during versus after training. To address this issue, we develop spatially enhanced loss functions (SELF) and demonstrate their use for a real-world problem: predicting the occurrence of thunderstorms (henceforth, “convection”) with NNs. In each SELF we use either a neighborhood filter, which highlights convection at scales larger than a threshold, or a spectral filter (employing Fourier or wavelet decomposition), which is more flexible and highlights convection at scales between two thresholds. We use these filters to spatially enhance common verification scores, such as the Brier score. We train each NN with a different SELF and compare their performance at many scales of convection, from discrete storm cells to tropical cyclones. Among our many findings are that (i) for a low or high risk threshold, the ideal SELF focuses on small or large scales, respectively; (ii) models trained with a pixelwise loss function perform surprisingly well; and (iii) nevertheless, models trained with a spectral filter produce much better-calibrated probabilities than a pixelwise model. We provide a general guide to using SELFs, including technical challenges and the final Python code, as well as demonstrating their use for the convection problem. To our knowledge this is the most in-depth guide to SELFs in the geosciences.

    Significance Statement

    Gridded predictions, in which a quantity is predicted at every pixel in space, should be verified with spatially aware methods rather than pixel by pixel. Neural networks (NN), which are often used for gridded prediction, are trained to minimize an error value called the loss function. NN loss functions in atmospheric science are almost always pixelwise, which causes the predictions to miss rare events and contain unrealistic spatial patterns. We use spatial filters to enhance NN loss functions, and we test our novel spatially enhanced loss functions (SELF) on thunderstorm prediction. We find that different SELFs work better for different scales (i.e., different-sized thunderstorm complexes) and that spectral filters, one of the two filter types, produce unexpectedly well-calibrated thunderstorm probabilities.

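A minimal sketch of one SELF-style construction, assuming a neighborhood (uniform) filter applied to both the forecast and the binary target before computing the Brier score, in the spirit of the fractions skill score; this illustrates the idea rather than the paper's exact loss functions, and the field sizes and filter width are illustrative.

```python
# Minimal sketch of a neighborhood-style spatially enhanced loss: smooth
# the forecast and the binary target with a uniform filter before taking
# the Brier score. Illustrative of the idea, not the paper's exact SELFs.
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_brier(prob, target, size=5):
    """Brier score on neighborhood-filtered fields (FSS-style).

    prob   : 2D array of predicted probabilities in [0, 1]
    target : 2D binary array of observed events
    size   : neighborhood width in pixels (the scale threshold)
    """
    p = uniform_filter(prob.astype(float), size=size, mode="nearest")
    o = uniform_filter(target.astype(float), size=size, mode="nearest")
    return np.mean((p - o) ** 2)

rng = np.random.default_rng(0)
target = (rng.random((64, 64)) < 0.05).astype(float)          # sparse "storms"
forecast = np.roll(target, shift=(2, 2), axis=(0, 1)) * 0.9   # near-miss

print("pixelwise Brier    :", np.mean((forecast - target) ** 2))
print("neighborhood Brier :", neighborhood_brier(forecast, target))
```

The neighborhood score rewards a spatially close forecast that pixelwise verification penalizes twice (a miss plus a false alarm), which is the training-versus-verification disconnect that SELFs are designed to remove.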