Title: On the choice of finite element for applications in geodynamics
Abstract. Geodynamical simulations over the past decades have widely been built on quadrilateral and hexahedral finite elements. For the discretization of the key Stokes equation describing slow, viscous flow, most codes use either the unstable Q1×P0 element, a stabilized version of the equal-order Q1×Q1 element, or more recently the stable Taylor–Hood element with continuous (Q2×Q1) or discontinuous (Q2×P-1) pressure. However, it is not clear which of these choices is actually the best at accurately simulating “typical” geodynamic situations. Herein, we provide a systematic comparison of all of these elements for the first time. We use a series of benchmarks that illuminate different aspects of the features we consider typical of mantle convection and geodynamical simulations. We will show in particular that the stabilized Q1×Q1 element has great difficulty producing accurate solutions for buoyancy-driven flows – the dominant forcing for mantle convection flow – and that the Q1×P0 element is too unstable and inaccurate in practice. As a consequence, we believe that the Q2×Q1 and Q2×P-1 elements provide the most robust and reliable choice for geodynamical simulations, despite the greater complexity in their implementation and the substantially higher computational cost when solving linear systems.
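For context, the variable-viscosity incompressible Stokes system that these elements discretize is usually written as follows (a minimal sketch of the standard strong form, not quoted from the paper itself):

\[
  -\nabla\cdot\bigl(2\eta\,\varepsilon(\mathbf{u})\bigr) + \nabla p = \rho\,\mathbf{g},
  \qquad
  \nabla\cdot\mathbf{u} = 0,
  \qquad
  \varepsilon(\mathbf{u}) = \tfrac{1}{2}\bigl(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf{T}}\bigr),
\]

where \(\mathbf{u}\) is the velocity, \(p\) the pressure, \(\eta\) the (spatially variable) viscosity, \(\rho\) the density, and \(\mathbf{g}\) the gravitational acceleration. In the element notation of the abstract, Qk denotes continuous piecewise polynomials of degree k in each coordinate direction on quadrilaterals/hexahedra, P0 denotes element-wise constant pressures, and P-1 denotes discontinuous element-wise linear pressures; the Q1×P0 and equal-order Q1×Q1 pairs do not satisfy the inf-sup (LBB) stability condition (hence the need for stabilization in the latter case), whereas the Taylor–Hood pairs Q2×Q1 and Q2×P-1 do.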
Award ID(s):
1925595 1821210 1835673
PAR ID:
10359238
Author(s) / Creator(s):
Thieulot, C.; Bangerth, W.
Date Published:
Journal Name:
Solid Earth
Volume:
13
Issue:
1
ISSN:
1869-9529
Page Range / eLocation ID:
229 to 249
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract. Many geodynamical models are formulated in terms of the Stokes equations that are then coupled to other equations. For the numerical solution of the Stokes equations, geodynamics codes over the past decades have used essentially every finite element that has ever been proposed for the solution of this equation, on both triangular/tetrahedral (“simplex”) and quadrilateral/hexahedral (“hypercube”) meshes. However, in many and perhaps most cases, the specific choice of element does not seem to have been the result of careful benchmarking efforts but rather to have been based on implementation efficiency or the implementers' background. In the first part of this paper (Thieulot and Bangerth, 2022), we provided a comprehensive comparison of the accuracy and efficiency of the most widely used hypercube elements for the Stokes equations. We did so using a number of benchmarks that illustrate “typical” geodynamic situations, specifically taking into account spatially variable viscosities. Our findings there showed that only Taylor–Hood-type elements with either continuous (Q2×Q1) or discontinuous (Q2×P-1) pressure are able to adequately and efficiently approximate the solution of the Stokes equations. In this, the second part of this work, we extend the comparison to simplex meshes. In particular, we compare triangular Taylor–Hood elements against the MINI element and one often referred to as the “Crouzeix–Raviart” element. We compare these choices against the accuracy obtained with hypercube Taylor–Hood elements at approximately the same computational cost. Our results show that, as on hypercubes, the Taylor–Hood element is substantially more accurate and efficient than the other choices. Our results also indicate that hypercube meshes yield slightly more accurate results than simplex meshes, but that the difference is relatively small and likely unimportant given that hypercube meshes often lead to slightly denser (and consequently more expensive) matrices.
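For reference, the simplex velocity–pressure pairs named in the abstract above are conventionally defined on triangles as follows (standard textbook definitions, assumed here rather than quoted from the paper):

\[
  \text{Taylor–Hood: } [P_2]^2 \times P_1,
  \qquad
  \text{MINI: } [P_1 \oplus B_3]^2 \times P_1,
  \qquad
  \text{Crouzeix–Raviart: } [P_2 \oplus B_3]^2 \times P_{-1},
\]

where \(P_1\) and \(P_2\) are continuous piecewise linear and quadratic spaces, \(B_3\) is the cubic interior bubble on each triangle, and \(P_{-1}\) is the discontinuous piecewise linear pressure space; the hypercube counterparts referred to in the comparison are the Q2×Q1 and Q2×P-1 pairs.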
  2. Abstract Eco‐evolutionary experiments are typically conducted in semi‐unnatural controlled settings, such as mesocosms; yet inferences about how evolution and ecology interact in the real world would surely benefit from experiments in natural uncontrolled settings. Opportunities for such experiments are rare but do arise in the context of restoration ecology—where different “types” of a given species can be introduced into different “replicate” locations. Designing such experiments requires wrestling with consequential questions. (Q1) Which specific “types” of a focal species should be introduced to the restoration location? (Q2) How many sources of each type should be used—and should they be mixed together? (Q3) Which specific source populations should be used? (Q4) Which type(s) or population(s) should be introduced into which restoration sites? We recently grappled with these questions when designing an eco‐evolutionary experiment with threespine stickleback (Gasterosteus aculeatus) introduced into nine small lakes and ponds on the Kenai Peninsula in Alaska that required restoration. After considering the options at length, we decided to use benthic versus limnetic ecotypes (Q1) to create a mixed group of colonists from four source populations of each ecotype (Q2), where ecotypes were identified based on trophic morphology (Q3), and were then introduced into nine restoration lakes scaled by lake size (Q4). We hope that outlining the alternatives and resulting choices will make the rationales clear for future studies leveraging our experiment, while also proving useful for investigators considering similar experiments in the future.
  3. There has been significant study of the sample complexity of testing properties of distributions over large domains. For many properties, it is known that the sample complexity can be substantially smaller than the domain size. For example, over a domain of size n, distinguishing the uniform distribution from distributions that are far from uniform in ℓ1-distance uses only O(√n) samples. However, the picture is very different in the presence of arbitrary noise, even when the amount of noise is quite small. In this case, one must distinguish whether samples are coming from a distribution that is ϵ-close to uniform or from a distribution that is (1−ϵ)-far from uniform. The latter task requires a number of samples that is nearly linear in n (Valiant, 2008; Valiant and Valiant, 2017a). In this work, we present a noise model that on one hand is more tractable for the testing problem and on the other hand represents a rich class of noise families. In our model, the noisy distribution is a mixture of the original distribution and noise, where the latter is known to the tester either explicitly or via sample access; the form of the noise is also known a priori. Focusing on the identity and closeness testing problems leads to the following mixture testing question: given samples of distributions p, q1, q2, can we test whether p is a mixture of q1 and q2? We consider this general question in various scenarios that differ in how the tester can access the distributions, and we show that this problem is indeed more tractable. Our results show that the sample complexity of our testers is exactly the same as in the classical non-mixture case.
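One natural way to formalize the mixture testing question in the abstract above (an illustrative formalization under assumed conventions, not necessarily the exact definition used in the paper) is:

\[
  \text{accept if } \; p = \alpha q_1 + (1-\alpha) q_2 \text{ for some } \alpha \in [0,1],
  \qquad
  \text{reject if } \; \min_{\alpha \in [0,1]} \bigl\lVert p - \alpha q_1 - (1-\alpha) q_2 \bigr\rVert_1 \ge \epsilon,
\]

where the tester receives samples from p, has either sample access to or explicit descriptions of q1 and q2, and may answer arbitrarily for distributions falling between the two cases.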
  4. We present the lowest-order hybridizable discontinuous Galerkin schemes with numerical integration (quadrature), denoted HDG-P0, for the reaction-diffusion equation and the generalized Stokes equations on conforming simplicial meshes in two and three dimensions. Here, by lowest order we mean that the (hybrid) finite element space for the global HDG facet degrees of freedom (DOFs) is the space of piecewise constants on the mesh skeleton. A discontinuous piecewise linear space is used for the approximation of the local primal unknowns. We give an optimal a priori error analysis of the proposed HDG-P0 schemes, which has not previously appeared in the literature for HDG discretizations with numerical integration. Moreover, we propose optimal geometric multigrid preconditioners for the statically condensed HDG-P0 linear systems on conforming simplicial meshes. In both cases, we first establish the equivalence of the statically condensed HDG system with a (slightly modified) nonconforming Crouzeix–Raviart (CR) discretization, where the global (piecewise-constant) HDG finite element space on the mesh skeleton has a natural one-to-one correspondence with the nonconforming CR (piecewise-linear) finite element space that lives on the whole mesh. This equivalence then allows us to use the well-established nonconforming geometric multigrid theory to precondition the condensed HDG system. Numerical results in two and three dimensions are presented to verify our theoretical findings.
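For orientation, the two model problems named in the abstract above are typically posed as follows (standard strong forms, assumed here for illustration; the paper's precise setting, coefficients, and boundary conditions may differ):

\[
  \text{reaction-diffusion: } -\Delta u + c\,u = f,
  \qquad
  \text{generalized Stokes: } -\nu\,\Delta\mathbf{u} + \gamma\,\mathbf{u} + \nabla p = \mathbf{f},
  \quad \nabla\cdot\mathbf{u} = 0,
\]

with reaction coefficients \(c, \gamma \ge 0\) and viscosity \(\nu > 0\). In the HDG-P0 scheme described above, the facet unknowns on the mesh skeleton are approximated by piecewise constants and the element-local primal unknowns by discontinuous piecewise linears; static condensation then eliminates the local unknowns, leaving the skeleton system that the abstract identifies with a (slightly modified) nonconforming Crouzeix–Raviart discretization.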