<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Decoupling Simulation Accuracy from Mesh Quality</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>December 2018</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10080686</idno>
					<idno type="doi"></idno>
					<title level='j'>ACM Transactions on Graphics</title>
<idno type="ISSN">0730-0301</idno>
<biblScope unit="volume"></biblScope>
<biblScope unit="issue"></biblScope>					

					<author>Teseo Schneider</author><author>Yixin Hu</author><author>Jeremie Dumas</author><author>Xifeng Gao</author><author>Daniele Panozzo</author><author>Denis Zorin</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[For a given PDE problem, three main factors affect the accuracy of FEM solutions: basis order, mesh resolution, and mesh element quality. The first two factors are easy to control, while controlling element shape quality is a challenge, with fundamental limitations on what can be achieved. We propose to use p-refinement (increasing element degree) to decouple the approximation error of the finite element method from the domain mesh quality for elliptic PDEs. Our technique produces an accurate solution even on meshes with badly shaped elements, with a slightly higher running time due to the higher cost of high-order elements. We demonstrate that it is able to automatically adapt the basis to badly shaped elements, ensuring an error consistent with high-quality meshing, without any per-mesh parameter tuning. Our construction reduces to traditional fixed-degree FEM methods on high-quality meshes with identical performance. Our construction decreases the burden on meshing algorithms, reducing the need for often expensive mesh optimization and automatically compensates for badly shaped elements, which are present due to boundary con- straints or limitations of current meshing methods. By tackling mesh gen- eration and finite element simulation jointly, we obtain a pipeline that is both more efficient and more robust than combinations of existing state of the art meshing and FEM algorithms.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Fig. <ref type="figure">1</ref>. The simulation accuracy is tied to mesh resolution in traditional FE methods (top, panels M2, M4, M6, M8: a Poisson problem reconstructing a quartic function with Dirichlet boundary conditions; the exact solution is shown in the top right), and dramatically decreases as the quality of the mesh worsens, leading to inaccurate and unpredictable results (bottom right, orange). We introduce a new technique to decouple the simulation accuracy from the mesh quality (bottom), which opens the door to accurate, black-box finite element analysis pipelines.</p><p>CCS Concepts: &#8226; Computing methodologies &#8594; Modeling and simulation;</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>The numerical solution of partial differential equations is a basic building block of many algorithms in graphics, geometry processing, digital fabrication, mechanical engineering, and many other scientific disciplines. Depending on the application, different levels of accuracy are required, leading to a plethora of methods that vary from inaccurate but real-time to very accurate but requiring many hours per integration step. The finite element method (FEM) is the most common approach for solving PDEs.</p><p>The accuracy of a FEM solution is determined by three main factors: basis approximation order (usually determined by the degree of the polynomials used), mesh resolution, and mesh quality. Increasing element order (often referred to as p-refinement) and increasing mesh resolution (h-refinement) are both common and widely used ways to increase simulation accuracy. Both can be achieved in a straightforward manner. In contrast, mesh quality is much harder to control, yet its impact on accuracy can be very substantial. Overwhelmingly, FEM basis constructions assume that the input mesh has elements of adequate quality.</p><p>Ensuring good element quality is a particular challenge for computer graphics applications, where coarse discretizations on fixed meshes with low-order elements (particularly sensitive to quality) are preferred, and highly complex shapes need to be handled. Even state-of-the-art meshing algorithms cannot guarantee element quality without nontrivial assumptions on the input mesh surface. 
Moreover, fundamentally, high-quality element shape cannot always be obtained: if the boundary of the mesh needs to be preserved, some elements have to be of poor shape, dictated by the boundary geometry (for example, the CAD model shown in Figure <ref type="figure">2</ref>).</p><p>In this paper, we introduce a simple, yet very effective, technique to decouple the simulation accuracy of elliptic PDEs from mesh element quality. That is, we propose a method that produces a solution whose approximation error is largely independent of the presence of badly shaped elements in the input mesh. This is achieved by using an a priori error estimate with the dependence on the mesh quality made explicit. Our technique uses standard Lagrangian elements, and differs from commonly implemented p-refinement methods in one critical way: the criterion for determining the degree is based on the input domain mesh, and not on the solution (e.g., the deformed mesh in elasticity). It is fully compatible with adaptive methods that additionally perform h-, p-, or hp-refinement based on an a posteriori error estimate of the solution.</p><p>Our approach reduces the burden of achieving high quality for all elements in the meshing phase, and allows us to obtain high accuracy even on models where a high-quality mesh is impossible to construct due to the presence of sharp features.</p><p>We validate our technique by testing it on two large datasets (each containing 9 809 models), demonstrating that a completely automated, controlled-accuracy analysis pipeline is enabled by combining our technique with robust mesh generators <ref type="bibr">[Shewchuk 1996;</ref><ref type="bibr">Hu et al. 2018</ref>]. We run our experiments with both state-of-the-art direct and iterative solvers, demonstrating that our technique benefits both small- and large-scale simulation scenarios, and that the introduction of higher-order elements does not result in solver problems. 
Our technique is a step toward designing black-box analysis pipelines for shape optimization, which rely on running millions of simulations to compute shape derivatives, since it provides unprecedented robustness to meshing quality at the cost of a minor increase in running times. To foster its adoption and further research, we attach a self-contained reference implementation, which we used to generate all the results reported in this paper, as well as the test datasets.</p><p>Fig. <ref type="figure">2</ref>. A model with two adjoining feature lines forming a zero angle: it is impossible to create a high-quality tetrahedral mesh of its interior, since mesh refinement with samples on the surface results in elements with angles approaching zero at the tip.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">RELATED WORK</head><p>The finite element method (FEM) is, by far, the most common approach for discretizing PDEs in science and engineering applications. FEM was introduced to computer graphics by <ref type="bibr">[Terzopoulos et al. 1987]</ref>, and it has been widely used for physically based simulation, geometric modeling, geometry processing, and computational fabrication.</p><p>Graphics Overview. A large fraction of computer graphics techniques rely on solving a PDE as a submodule, the most common examples being diffusion equations, elastic deformations, and their variations (elastoplasticity and viscoelasticity). We briefly review some examples of work in computer graphics that motivated our method. Due to the vast range of applications of FEM, we do not intend this list to be comprehensive.</p><p>In geometric modeling, deformation methods heavily rely on FE, including the construction of bases for efficient real-time methods <ref type="bibr">[Sorkine 2005;</ref><ref type="bibr">Jacobson et al. 2011;</ref><ref type="bibr">Wang et al. 2015</ref>]. In geometry processing, FEM is used in surface/volume parametrization <ref type="bibr">[Hormann et al. 2008]</ref>, surface reconstruction <ref type="bibr">[Kazhdan and Hoppe 2013]</ref>, vector field design <ref type="bibr">[Vaxman et al. 2016]</ref>, quadrangulation/remeshing <ref type="bibr">[Bommes et al. 2012;</ref><ref type="bibr">Ling et al. 2014]</ref>, image retargeting <ref type="bibr">[Kaufmann et al. 2013]</ref>, and surface/volumetric texturing <ref type="bibr">[Orzan et al. 2008;</ref><ref type="bibr">Takayama et al. 2010]</ref>. In computational fabrication, elasticity FEM is essential for predicting properties of printed shapes (e.g., checking structural soundness <ref type="bibr">[Zhou et al. 2013]</ref>) and for shape optimization <ref type="bibr">[Panetta et al. 2015;</ref><ref type="bibr">Musialski et al. 2016;</ref><ref type="bibr">Zhu et al. 
2017]</ref>.</p><p>Several desirable features are common to many of these applications: (1) the ability to handle complex shapes robustly and automatically, and (2) maximal performance for an acceptable level of error.</p><p>As a consequence, triangular or tetrahedral elements of minimal degree (linear) are most commonly used, since they can be robustly generated <ref type="bibr">[Shewchuk 1996;</ref><ref type="bibr">Hu et al. 2018]</ref> and they lead to efficient FE algorithms. However, these elements have widely known limitations, and require careful algorithmic choices to lead to stable and efficient simulations.</p><p>FEM Fundamentals. Our approach is based on classical techniques from the engineering and numerical analysis literature. In particular, we use interpolation error estimates to obtain the error dependence on the shape and size of an element for a general element order. Such estimates in the general case were derived initially in <ref type="bibr">[Ciarlet and Raviart 1972]</ref>, with additional refinements later, most recently in <ref type="bibr">[Kobayashi and Tsuchiya 2016]</ref>. These estimates show that the error is strongly dependent on the element shape. Almost all existing FE methods for tetrahedral meshing, adaptive and nonadaptive, require a bound on mesh quality to guarantee a bound on the error, which is challenging (or impossible, see Figure <ref type="figure">2</ref>) to obtain in practice. Our approach parsimoniously uses high-order finite elements <ref type="bibr">[Schwab 1998</ref>] to eliminate this requirement.</p><p>2D and 3D Meshing. Tetrahedral meshing is a central, and not fully solved, problem in geometry processing and computational geometry. The inconsistency between what state-of-the-art meshing methods can guarantee and what is required by standard FEM discretizations is a motivation for our work. 
A number of increasingly robust and efficient meshing algorithms have been developed for 2D and 3D <ref type="bibr">[Shewchuk 1996;</ref><ref type="bibr">Dobrzynski 2012;</ref><ref type="bibr">Stuart et al. 2013;</ref><ref type="bibr">Si 2015;</ref><ref type="bibr">Jamin et al. 2015;</ref><ref type="bibr">Hu et al. 2018]</ref>. All these methods combine an initial mesh construction with subsequent mesh improvement, which usually dominates the running time. Despite the vast academic and industrial interest, no known 3D meshing method guarantees a bound on the maximal edge-to-inradius ratio required by FEM methods for a general input surface. While such guarantees exist for triangle meshing <ref type="bibr">[Shewchuk 1996</ref>], they still cannot be enforced if the boundary has to be preserved exactly (Figure <ref type="figure">2</ref>).</p><p>Adaptive FEM in Graphics. Adaptive versions of FEM, which are extensively studied in the engineering literature, have a number of important applications in graphics, starting with <ref type="bibr">[Wu et al. 2001]</ref> (see <ref type="bibr">[Manteaux et al. 2017</ref>] for a recent overview). These methods exploit almost exclusively h-refinement (i.e., the size of the elements changes, not their order), and are used for problems involving large deformations, including the emergence of geometric features or discontinuities. Examples include tearing and cracking <ref type="bibr">[Pfaff et al. 2014]</ref>, resolving features on cloth <ref type="bibr">[Simnett et al. 2009]</ref>, cutting <ref type="bibr">[Seiler et al. 2011]</ref>, image retargeting <ref type="bibr">[Kaufmann et al. 2013]</ref>, elastoplasticity <ref type="bibr">[Wicke et al. 2010;</ref><ref type="bibr">Piovar&#269;i et al. 2016]</ref>, and viscoelasticity <ref type="bibr">[Wojtan and Turk 2008]</ref>. At the same time, a sampling of recent works using FEM, <ref type="bibr">[Xu and Barbi&#269; 2016;</ref><ref type="bibr">Kim et al. 
2017;</ref><ref type="bibr">Liu et al. 2017;</ref><ref type="bibr">Wang et al. 2017]</ref> shows that adaptivity is commonly avoided whenever possible, as simulation on a fixed mesh offers substantial benefits, especially for interactive applications, enabling precomputation and avoiding resampling of quantities stored on the mesh (e.g., skinning weights, UV coordinates, colors).</p><p>The closest works to ours in the graphics literature are <ref type="bibr">[Bargteil and Cohen 2014]</ref>, <ref type="bibr">[Edwards and Bridson 2014]</ref>, and <ref type="bibr">[Kaufmann et al. 2013</ref>], which add quadratic elements to increase the accuracy of elasticity simulations, fluid simulations, and image deformations, respectively. Similarly to all other adaptive methods, their adaptivity is applied a posteriori, that is, by using an estimate of the solution error computed after each iteration. Basis refinement <ref type="bibr">[Grinspun et al. 2002]</ref> can be viewed as a general form of refinement including both h and p methods.</p><p>While we present our method in the context of static meshes, it is fully compatible with adaptive refinement/coarsening, which is orthogonal to our contribution. An important distinction is that in our case elements are adapted to the material mesh, not to its image under deformation: while all existing methods assume that the initial mesh satisfies typical regularity assumptions, our approach lifts this assumption.</p><p>Adaptive FEM in Engineering. Different types of adaptive FEM refinement have been developed, based on a posteriori error estimates; the fundamentals of p- and hp-refinement are summarized in <ref type="bibr">[Babu&#353;ka and Suri 1994]</ref>. A practical parallel implementation is available, e.g., in the Deal.II library <ref type="bibr">[Bangerth et al. 2007;</ref><ref type="bibr">Alzetta et al. 2018]</ref>. 
Ten distinct methods for adaptive refinement are summarized and compared in <ref type="bibr">[Mitchell and McClain 2014]</ref>, using a set of test problems from <ref type="bibr">[Mitchell 2013</ref>]. Invariably, these methods assume that the material mesh has sufficient regularity, and adaptation is based on either forces, boundary conditions, or a posteriori estimates of the solution. Pure p-refinement has been demonstrated to be useful for a number of engineering problems (e.g., <ref type="bibr">[D&#252;ster et al. 2003</ref>], <ref type="bibr">[Franke et al. 2008]</ref>), although it is known to be suboptimal for singular and close-to-singular solutions. Our method increases the basis degree using an a priori error estimate based on the geometry of the input mesh, for the purpose of decoupling simulation errors from mesh quality.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">FINITE ELEMENT NOTATION AND SETUP</head><p>The general form of a scalar second-order elliptic PDE on a given domain &#8486; is</p><p>&#8722;&#8711; &#183; (A(x)&#8711;u(x)) = f (x), x &#8712; &#8486;, subject to u = d on &#8706;&#8486; D and &#8711;u &#183; n = g on &#8706;&#8486; N ,</p><p>where &#8706;&#8486; D is the part of the boundary where the values d of the solution u are specified (Dirichlet) and &#8706;&#8486; N is the part of the boundary where the normal derivative g of the function u is specified (Neumann). We assume that &#8486; is a polygonal/polyhedral domain, although we do not expect any significant changes for curved boundaries.</p><p>The most common PDE in this class is the Poisson equation &#8722;&#8710;u = f (x). A second-order elliptic PDE with a vector-valued unknown function u (e.g., elasticity) is defined similarly, but boundary conditions on derivatives may take a more complex form compared to Neumann (e.g., conditions on surface force densities). We use the Poisson, linear elasticity, and non-linear elasticity equations for evaluation, which cover the most common elliptic equations present in graphics applications. We do not consider higher-order PDEs (most importantly plates and shells); however, the approach we describe naturally extends to these cases. We set up the notation using the Poisson equation as an example.</p><p>An essential concept in FEM theory is the space H m (&#8486;) of functions on &#8486; with integrable m-th weak derivatives and its associated m-norm &#8741; &#8226; &#8741; m , see Appendix A for the definitions.</p><p>Weak Form. The FEM is based on discretizing the weak form of the PDE. For the Poisson equation, it has the form</p><p>&#8747; &#8486; &#8711;u &#183; &#8711;v = &#8747; &#8486; f v, (1)</p><p>which must hold for any test function v in H 1 (&#8486;) vanishing on the boundary. For a twice-differentiable function u, this equation is obtained by integrating &#8747; (&#8722;&#8710;u)v = &#8747; f v by parts, so it is equivalent to the original equation. 
The weak form requires weaker assumptions on function differentiability, which makes it easier to reason about lower-order approximations (e.g., piecewise-linear) of the solution, and to establish their convergence.</p><p>Basis Functions. In a finite element discretization the solution u is approximated using the basis functions &#981; i , i = 1, . . . , n, as u h = &#8721; n i=1 u i &#981; i where u i are the unknown coefficients.</p><p>As a basis, we use interpolatory continuous polynomial functions (Lagrange basis) defined on triangulations of &#8486; in 2D and tetrahedralizations in 3D. In this setting, the parameter h in u h indicates the largest edge length present in the tessellation. The quality of approximation of a basis depends on the quality of the tessellation, which can be quantified, per element, by the shape parameter</p><p>&#963; E = &#961;/h, (2)</p><p>where h is the maximum edge length of the element and &#961; is the inscribed sphere radius. Usually, the shape parameter is considered to be bounded from below by a constant K independent of h, for admissible families of tessellations. Such families of tessellations are referred to as regular. In contrast to the standard approach, we do not assume regularity of tessellations.</p><p>For the Lagrange basis, each function &#981; i is associated with a node where it has value one, while being zero at all other nodes. As a consequence, the coefficients u i are just the values of the function u h at the nodes. The number of nodes (and corresponding basis functions) and their positions are directly determined by the order of the basis, see Figure <ref type="figure">3</ref>. For instance, for the piecewise-linear basis (P 1 ), the nodes coincide with the mesh vertices (see Appendix B for higher degrees).</p><p>It is useful to view the basis functions as being assembled from local basis functions defined on each element separately; this approach is helpful for defining a basis with polynomial degree depending on the element. 
From this point of view, each element has a separate set of local nodes, and an interpolatory polynomial function is associated with each local node. Elements sharing a vertex, edge, or face have coinciding local nodes, which are viewed as a single global node, with an associated single global basis function whose restriction to each element containing the node is equal to the local polynomial basis function. If a node is on a boundary between elements, but is not present in one of them (e.g., if a quadratic element is adjacent to a linear one, this holds for nodes at edge midpoints), the relationship between global and local basis function coefficients has a more complex form.</p><p>Discretized Equations. By substituting u h for u and using &#981; i , i = 1, . . . , n, as test functions v in (1), we obtain an n &#215; n linear system of the form Au = b, where A is referred to as the stiffness matrix, b is obtained from f , and u is the vector of unknown coefficients u i . Note that if the PDE is not linear, this discretization leads to a non-linear system, which is usually solved by using Newton's method. For the Poisson equation, the entries of A are given by</p><p>a i j = &#8747; &#8486; &#8711;&#981; i &#183; &#8711;&#981; j .</p><p>Similarly, the right-hand side entries are b j = &#8747; &#8486; &#981; j f . Dirichlet boundary conditions are treated as fixed values u i at the boundary nodes. The Neumann boundary conditions are imposed by adding the boundary term &#8747; &#8706;&#8486; N g &#981; j , with g the prescribed normal derivative, to b j for any node j on the boundary &#8706;&#8486; N .</p><p>If we replace &#981; i with linear hat functions (P 1 basis, Appendix B), this formula reduces to the commonly used cotangent Laplacian discrete operator <ref type="bibr">[Pinkall and Polthier 1993]</ref>.</p><p>Geometric Mapping. The geometric mapping g is used for computing the integrals in a i j and b j . These integrals need to be computed on every element E (triangle or tetrahedron), which all have different shapes. 
The integration is performed element by element, with</p><p>a i j = &#8721; E a E i j ,</p><p>where the summation is over all elements on which both basis functions i and j do not vanish. Each element term a E i j is computed on a reference element &#274; (in our case, a right-angle triangle/tetrahedron positioned at the origin, with edges along the axes) through a change of variables. Let G be the Jacobian of the mapping g; then</p><p>a E i j = &#8747; &#274; (G &#8722;T &#8711;&#966; i ) &#183; (G &#8722;T &#8711;&#966; j ) |det G |,</p><p>where &#966; are the bases defined on the reference element &#274;. We use the simplest piecewise affine geometric map g for each element:</p><p>g(x) = Bx + c,</p><p>where x &#8712; &#274;, B is a matrix, and c a constant vector, both depending on E.</p><p>Quadrature. All integrals are computed numerically by means of quadrature points and weights, which translates the integrals into weighted sums. The quadrature needs to be of sufficiently high order for the error of the method not to be affected. Although it is more expensive, we use quadratures that integrate polynomials of degree k exactly for degree-k elements <ref type="bibr">[Xiao and Gimbutas 2010]</ref>.</p></div>
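The assembly procedure above (per-element integrals pulled back to the reference element through the affine map) can be sketched in a few lines. The following Python/NumPy fragment is a minimal illustration for P 1 elements on a 2D triangle mesh, not the paper's reference implementation; the function name and mesh layout are our own. For P 1 bases the reference gradients are constant, so no quadrature loop is needed.

```python
import numpy as np

def assemble_p1_stiffness(vertices, triangles):
    """Assemble the Poisson stiffness matrix for P1 (hat) bases.

    Each per-element integral a^E_ij is computed by pulling the basis
    gradients back to the reference right triangle via the affine map
    g(x) = B x + c, whose Jacobian is the constant matrix B.
    """
    n = len(vertices)
    A = np.zeros((n, n))
    # Gradients of the three reference hat functions 1-x-y, x, y (row vectors).
    grad_ref = np.array([[-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
    for tri in triangles:
        p0, p1, p2 = (vertices[i] for i in tri)
        B = np.column_stack((p1 - p0, p2 - p0))  # Jacobian of the geometric map
        grads = grad_ref @ np.linalg.inv(B)      # physical gradients (row convention)
        area = 0.5 * abs(np.linalg.det(B))       # |det B| times reference-triangle area
        local = area * grads @ grads.T           # integrand is constant per element
        for a in range(3):
            for b in range(3):
                A[tri[a], tri[b]] += local[a, b]
    return A
```

For P 1 bases the resulting matrix coincides with the cotangent Laplacian mentioned above; higher-order elements follow the same pattern with a quadrature loop over the reference element.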
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">A PRIORI ESTIMATE</head><p>One important limitation of the standard FE construction is that the basis depends on the shape and size of the elements of the discrete mesh: as shown in Figure <ref type="figure">1</ref>, numerical and approximation errors might lead to inaccurate solutions that are not representative of the actual solutions of the underlying equation.</p><p>We propose a modification of the standard discretization that takes into account the geometric quality of the mesh during the basis construction, compensating for the potential error by increasing the degree of the basis. Our algorithm is divided into three stages: (1) we use an a priori error estimate, including explicit dependence on the shape of the element, to decide the polynomial degree of the basis assigned to each triangle, (2) we propagate the polynomial degrees to the adjacent elements of the mesh to ensure the desired polynomial reproduction order on the badly shaped elements, and (3) we construct a basis satisfying a necessary set of constraints on degrees of freedom to ensure C 0 continuity between elements of different orders.</p><p>These modifications are easy to add to existing FEM systems, and effectively decouple the mesh quality from the accuracy, as demonstrated in Section 6, while maintaining, for reasonable meshes, the same performance as the lowest-degree elements used.</p><p>Overview. Let k be the user-specified minimal degree, which will likely be 1 (i.e., linear elements) for many graphics applications, and &#293; be the average edge length. Our strategy aims at matching the error of every element to the error of a "perfectly shaped" element by increasing the degree of its basis, that is, by ensuring that the error of every element is lower than that of a regular triangle or tetrahedron of degree k and edge length &#293;. 
Therefore, to determine each element's degree, we need an error predictor, relative to a perfectly shaped element, for the solution.</p><p>Let &#915;(k, E)&#8741;u &#8741; k +1 be an error bound for the FE solution, where &#8741;u &#8741; k +1 is the H k +1 -norm of the exact solution (Appendix A). The error predictor &#915;(k, E) depends on the degree of the basis k and the geometry of the element E, but not on u. To obtain the degree k E , we solve for the smallest k E &#8805; k such that</p><p>&#915;(k E , E) &#8804; &#915;(k, &#202;), (3)</p><p>where &#202; is the regular equilateral triangle or tetrahedron with uniform side &#293;. We will now explain how we choose the error predictor &#915; for linear and non-linear elliptic PDEs.</p><p>Standard A Priori Error Estimate Summary. The error estimates for degree-k conforming triangular and tetrahedral finite elements for second-order linear and non-linear elliptic problems typically have the form</p><p>&#8741;u &#8722; u h &#8741; 0 &#8804; C h k +1 &#8741;u &#8741; k +1 , (4)</p><p>where u is the exact solution, u h is the solution of the finite element system, h is the maximal edge length in the mesh, and C is a constant independent of h, u, and u h . Note that &#8741;u &#8722; u h &#8741; 0 is the L 2 -norm of the error. This estimate holds for sufficiently regular solutions, and the order of convergence may decrease if the solution has a singularity. C depends on a range of other factors, including the PDE coefficients and, most importantly for our purposes, the shape of the elements. A similar estimate can be derived for the error of the solution gradient &#8741;u &#8722; u h &#8741; 1 , with the power of h decreased by one. These estimates are classically obtained under the regularity assumption, that is, by assuming that the shape quality &#963; E (Eq. (<ref type="formula">2</ref>)) is bounded from below by a constant K independent of h, for all elements E. In other words, the minimal quality of the elements is bounded by K, an assumption that rarely holds on real-world meshes. We want to derive estimates avoiding this assumption, explicitly keeping their dependence on element quality.</p><p>Shape-Dependent Error Estimate. 
This estimate is based on a textbook derivation (e.g., <ref type="bibr">[Ciarlet 1976</ref>]), but, due to ubiquitous mesh regularity assumptions, the explicit dependence on the shape parameter &#963; is typically lost. We briefly review the steps leading to the appearance of the shape factor in the estimate below, and, for completeness, present a full derivation in a special case in Appendix C. We also note that while the convergence order of an estimate may decrease as a consequence of solution singularities, we have experimentally observed the shape dependence to remain similar (see the singular solution experiments in Section 6). The error bound of the form (<ref type="formula">4</ref>) is usually obtained in three steps (see, e.g., <ref type="bibr">[Ciarlet 1976</ref>]):</p><p>&#8226; Step 1. First, by a general estimate (Cea's lemma), the error in the FE solution is bounded by the best possible approximation error in the FE basis, in the H 1 norm. The fact that the H 1 norm is used is important, as it is impossible to make a similar estimate in the L 2 norm for the PDEs we consider.</p><p>&#8226; Step 2. The best possible H 1 approximation error is estimated from above by the H 1 error of interpolation by the FE basis (as it is clearly no better than the best possible).</p><p>&#8226; Step 3. The L 2 error in the solution is bounded by the H 1 error, scaled by the element size h. This is done using the Aubin-Nitsche lemma, which requires using an interpolation error estimate one more time.</p><p>The first estimate is quite general, and involves constants dependent on the PDE, but not on the element shapes; the constants arising at the next two steps involve interpolation errors, including the shape factor. At each of these two steps, an additional factor 1/&#963; appears in the formula (Appendix C), leading to the following L 2 error estimate:</p><p>&#8741;u &#8722; u h &#8741; 0 &#8804; C max E (h E k +1 /&#963; E 2 ) &#8741;u &#8741; k +1 , (5)</p><p>where &#963; E is the shape parameter and h E is the maximal edge length of an element E. 
Figure <ref type="figure">4</ref> shows the relation between &#963; and the error for the example in Figure <ref type="figure">1</ref>, and Eq. (<ref type="formula">5</ref>) confirms that the L 2 error is inversely proportional to &#963; 2 , while the H 1 error is inversely proportional to &#963; .</p><p>Formula for choosing k. Based on Eqs. (<ref type="formula">5</ref>) and (4), we define the per-element function</p><p>&#915;(k, E) = h E k +1 /&#963; E 2 . (6)</p><p>Plugging (<ref type="formula">6</ref>) into (3) (and adding a user-provided, element-independent tolerance B), we obtain</p><p>h E k E +1 /&#963; E 2 &#8804; B &#293; k +1 /&#963; 2 , (7)</p><p>where &#293; is the average edge length of the mesh and &#963; is the shape parameter of a perfectly regular element: &#963; = &#8730; 6/12 in 3D and &#963; = &#8730; 3/6 in 2D. By solving (7) for the degree k E of an element, we obtain</p><p>k E &#8805; ((k + 1) ln &#293; + 2 ln(&#963; E /&#963;) + ln B)/ln h E &#8722; 1, (8)</p><p>and we assign to E the smallest integer degree k E &#8805; k satisfying this bound (the direction of the inequality assumes h E &lt; 1).</p><p>Comparison with Interpolation Error. Before proceeding, we compare the solution error (<ref type="formula">5</ref>) to the interpolation error of the Lagrange basis, which is commonly used to evaluate tessellation quality. For instance, <ref type="bibr">[Shewchuk 2002</ref>] describes accurate formulas for both the L 2 and H 1 interpolation errors for piecewise-linear elements. These demonstrate that the gradient (equivalently H 1 ) error is inversely proportional to the shape parameter &#963; , while the L 2 error depends only on h, not on &#963; .</p><p>Specifically, for an element E of degree k, the interpolation errors have the form</p><p>&#8741;u &#8722; u I &#8741; 0, E &#8804; C 0 h E k +1 &#8741;u &#8741; k +1, E and &#8741;u &#8722; u I &#8741; 1, E &#8804; C 1 (h E k /&#963; E ) &#8741;u &#8741; k +1, E ,</p><p>where u I is the interpolant of u, &#8741; &#8226; &#8741; m, E is the norm restricted to the element E, and the constants C 0 and C 1 are independent of the element size or shape and of k.</p><p>Note the contrast to (<ref type="formula">5</ref>): the solution error grows faster with deterioration in shape, compared to the gradient error, consistent with Figure <ref type="figure">4</ref>. This distinction is confirmed in practice: badly shaped elements lead to large solution errors, not just large errors in solution derivatives and related quantities (e.g., stresses in elasticity). For this reason, the interpolation error is not necessarily the best way to assess the effect of element quality. 
In contrast, (5) measures the effect of element shape on the solution error more directly.</p></div>
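To make the degree-selection criterion concrete, the following is a 2D toy sketch. The exact per-element function Γ and the constants are given by Eq. (6)-(8), which are not reproduced here, so this version only keeps the proportionalities of Eq. (5): per-element error ∝ h E ^(k+1) /σ E ^2 , compared against the target B ĥ ^2 /σ_reg ^2 for a perfectly regular element. All function names, the degree cap k_max = 4, and the stopping rule are our assumptions, not the paper's formulas.

```python
import math

# Shape parameter of an equilateral triangle: inradius / edge = sqrt(3)/6.
SIGMA_REG_2D = math.sqrt(3) / 6

def shape_parameter(tri):
    """sigma_E = inradius / longest edge for a triangle given as 3 points."""
    (ax, ay), (bx, by), (cx, cy) = tri
    area = 0.5 * abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay))
    pairs = [(tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])]
    edges = [math.dist(p, q) for p, q in pairs]
    inradius = 2.0 * area / sum(edges)  # r = 2 * area / perimeter
    return inradius / max(edges)

def choose_degree(h_e, sigma_e, h_avg, B=1.0, k_max=4):
    """Smallest k such that h_E^(k+1)/sigma_E^2 stays within
    B * h_avg^2 / sigma_reg^2 (the error of a regular P1 element)."""
    target = B * h_avg**2 / SIGMA_REG_2D**2
    k = 1
    while k != k_max and h_e ** (k + 1) / sigma_e**2 > target:
        k += 1
    return k
```

A well-shaped element of average size keeps degree 1, while a sliver of the same size is pushed toward the degree cap, which mirrors the behavior described in Section 6.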
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">TECHNICAL AND IMPLEMENTATION DETAILS</head><p>Equipped with our a priori estimate, we briefly describe how to design an FE method that uses it to decouple simulation accuracy from element quality, by assigning degrees to elements based on their shape quality.</p><p>Propagation of Tags. While the degrees of freedom of P 1 elements correspond to the corners of the mesh only, higher-order elements have DOFs on edges/faces (P 2 ) and in their interior (P 3 and higher). To ensure C 0 continuity on edges/faces, it is necessary to impose constraints on some of the degrees of freedom on shared edges/faces of the elements. However, locking degrees of freedom reduces the representation power of the basis on the higher-order element: an example is shown in Figure <ref type="figure">6</ref>, where a P 3 element touches a P 1 element. As a result, to ensure C 0 continuity, the functions in the space spanned by the basis have to be linear on the shared edge of the elements, that is, not all cubic functions on the P 3 element can be reproduced. To ensure that an element E has a complete P k E basis, which is required for the H 1 interpolation error estimate to hold, we need to ensure that all its edge/face neighbors have at least degree k E .</p><p>To enforce this condition, we do a pass over all elements, in ascending order by degree, and for each element E we navigate its edge and face neighbors and raise their order to k E if it is strictly smaller than k E (Figure <ref type="figure">5</ref>).</p><p>C 0 Basis Construction. To achieve the optimal convergence order of k + 1, three conditions need to be satisfied <ref type="bibr">[Braess 2007]</ref>: (a) polynomial reproduction up to degree k, (b) quadrature accuracy, and (c) consistency. (a) is satisfied by construction in our setting, since we only use P k E elements with k E ≥ k. 
(b) can be easily satisfied by using a quadrature order that integrates polynomials of degree k exactly. Consistency (c) is ensured by using conforming, that is, C 0 , elements.</p><p>To guarantee continuity of the basis, we mark as interface elements all elements that have at least one neighbor with a smaller order, and mark the remaining ones as regular. For regular elements, the basis and node construction follows the standard finite element construction <ref type="bibr">[Braess 2007]</ref>.</p><p>For the interface between elements of different orders, we need to introduce a set of constraints to ensure continuity. We observe that, for the specific case of the P k basis (Appendix B), a useful property holds: any polynomial of order k 1 &lt; k 2 can be viewed as a polynomial of order k 2 ; therefore, any function in the span of the P k 1 basis can be expressed in the P k 2 basis. In particular, this holds on interface faces/edges of elements E 1 and E 2 of different orders k 1 and k 2 .</p><p>As k 1 &lt; k 2 , some of the nodes of E 2 at the interface with E 1 are not present in E 1 (but, for the Lagrange basis, all nodes of E 1 are present in E 2 ); as a consequence, if a basis function is associated with such a node, it will be discontinuous, as there is no matching polynomial local basis on the other side. For this reason, we eliminate the degrees of freedom at these nodes, constraining their values to those of the lower-order polynomial interpolant of order k 1 , computed from the unknowns at the shared nodes of E 1 and E 2 . This ensures continuity at the interface.</p><p>As the positions of the nodes in barycentric coordinates are independent of the element geometry, these interpolant values are linear combinations of known values with coefficients depending only on k 1 and k 2 . 
The resulting constraints can either be added to the stiffness matrix, or used to eliminate the dependent unknowns from the system (we choose the latter in our implementation).</p><p>For the two-dimensional example in Figure <ref type="figure">6</ref>, the constraint, e.g., for the local DOF n 16 at the edge (in gray) that the P 3 element E 1 shares with the P 1 element E 2 , is obtained as follows: we evaluate the three lower-order P 1 bases ϕ 21 = x + y − 1, ϕ 22 = 1 − y, and ϕ 23 = 1 − x at the node n 16 , giving the constraint v 16 = (1/3) v 4 + (2/3) v 2 for the local node n 16 of the P 3 element, which we eliminate. All local degrees of freedom on E 1 other than those at n 16 and n 17 correspond directly to the global degrees of freedom, so the restriction of the global interpolant to E 1 has the form</p><p>Collecting the terms for each remaining unknown, we get the expression for the restriction of the global basis function at node n 2 to E 1 , and a similar expression for ϕ G 4 . Note that, restricted to the common edge of E 1 and E 2 , these functions are linear.</p><p>We apply this construction to all interface elements in increasing order of degree: in this way, we are guaranteed to always have all the information needed to define the continuity constraints. This holds since, for an element of degree k, we only need to know the explicit representation of the boundary for the neighbors of degree lower than k.</p></div>
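The degree-propagation pass from the beginning of this section can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation; `degree` and `neighbors` are hypothetical stand-ins for the mesh data structures.

```python
def propagate_degrees(degree, neighbors):
    """One pass over the elements in ascending order of assigned degree:
    each element E raises any edge/face neighbor of strictly smaller degree
    up to k_E, so that E keeps a complete P_{k_E} basis.
    degree[e]    -- degree assigned to element e by the shape criterion
    neighbors[e] -- edge/face-adjacent elements of e"""
    out = list(degree)
    for e in sorted(range(len(degree)), key=lambda i: degree[i]):
        for n in neighbors[e]:
            # raise the neighbor if needed, never lower it
            out[n] = max(out[n], degree[e])
    return out
```

Note that raised neighbors are not themselves required to have a complete basis: only elements whose shape demanded a high degree need that guarantee, which is why a single ascending pass suffices.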
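Since the interface constraints depend only on k 1 and k 2 , they can be tabulated once. As a sketch (assuming equispaced Lagrange nodes on the shared edge, parametrized by t in [0, 1]; the function names are ours), the weights are obtained by evaluating the lower-order 1D Lagrange basis at the higher-order element's edge nodes:

```python
from fractions import Fraction

def lagrange_1d(k, i, t):
    """i-th 1D Lagrange basis of degree k with equispaced nodes j/k, at t."""
    nodes = [Fraction(j, k) for j in range(k + 1)]
    val = Fraction(1)
    for j, x_j in enumerate(nodes):
        if j != i:
            val *= (t - x_j) / (nodes[i] - x_j)
    return val

def interface_weights(k_low, k_high):
    """For each interior edge node t = m/k_high of the higher-order element,
    return the weights expressing its constrained value as a combination of
    the k_low + 1 shared edge nodes of the lower-order side."""
    return {
        Fraction(m, k_high): [lagrange_1d(k_low, i, Fraction(m, k_high))
                              for i in range(k_low + 1)]
        for m in range(1, k_high)
    }
```

For k_low = 1, k_high = 3, the node at t = 1/3 gets weights (2/3, 1/3) for the two edge endpoints, matching the constraint v 16 = (1/3) v 4 + (2/3) v 2 of Figure 6 up to the orientation of the shared edge.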
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">EVALUATION</head><p>We implemented our algorithm in C++, using Eigen <ref type="bibr">[Guennebaud et al. 2010]</ref> for linear algebra routines, and PARDISO <ref type="bibr">[De Coninck et al. 2016;</ref><ref type="bibr">Verbosio et al. 2017;</ref><ref type="bibr">Kourounis et al. 2018]</ref> (figures 7, 8, 11, 21 and Table <ref type="table">1</ref>) and HYPRE <ref type="bibr">[Falgout and Yang 2002]</ref> (figures 9, 10, 16, 17, and 20) for solving sparse linear systems. The source code of our reference implementation is available in the additional material. The experiments were performed on cluster nodes with 2 Xeon E5-2690v4 2.6GHz CPUs and 250GB memory, running 8 processes in parallel on each node, each with 60GB of reserved memory.</p><p>h-efficiency. In most cases, we measure the error in units of ĥ ^2 or ĥ, eliminating the resolution-dependent part and thus allowing us to measure the efficiency of the discretization for a fixed resolution. This approach allows us to compare the results in a uniform and consistent way, since our datasets contain meshes of different resolutions. To measure h-efficiency for the solution error and gradient error, we use</p><p>Most tests are done for problems with known exact solutions u, that is, the boundary conditions and the right-hand side of the problem are obtained analytically from a known function u.</p><p>Mesh Independence. Our technique effectively decouples the accuracy of the solution from the mesh used, as we demonstrate in a series of examples of increasing complexity.</p><p>(1) We bend a discretized 2D bar in Figure <ref type="figure">7</ref> and a 3D bar in Figure <ref type="figure">8</ref>, by applying a tangential force to the top segment/plane. Different discretizations produce visibly different results, due to the approximation errors in the anisotropic elements in the center of the bar. 
When our technique is applied, the solution is similar for all meshes.</p><p>Fig. <ref type="figure">8</ref>. Neo-Hookean elasticity example on a 3D bar; the layout is identical to Figure <ref type="figure">7</ref>. Similarly to the 2D case, standard P 1 elements exhibit a major displacement error (highlighted by an outline of the reference solution), while our technique is not affected by the mesh quality.</p><p>(2) We demonstrate that our method produces indistinguishable solutions for a Poisson problem on two sequences of tetrahedral meshes generated with TetWild <ref type="bibr">[Hu et al. 2018]</ref> and progressively damaged (Figure <ref type="figure">9</ref>). The meshes are damaged by adding noise of increasing strength to the vertex positions, moving them one at a time and disallowing moves that flip any tetrahedron. We also show the influence of the parameter B on the accuracy: lower values of B introduce more high-order elements, resulting in a lower error but a larger number of degrees of freedom; conversely, allowing a looser bound on the acceptable error yields fewer high-order elements and a higher error. We use the Franke test function <ref type="bibr">[Franke 1979]</ref> as the analytic solution. Note that as the quality of the mesh improves, our method introduces fewer high-order elements, since they are not required, and the assembly and construction time decreases accordingly. 
While the time required by uniform elements stays constant, their errors follow the quality of the discretization (Figure <ref type="figure">1</ref>), whereas ours is unaffected by the bad elements.</p><p>(3) The insensitivity to mesh quality is also demonstrated for the same Poisson problem in Figure <ref type="figure">10</ref>, by using meshes created with 5 different tetrahedral meshing algorithms to mesh 1 137 surface meshes that could be meshed by all of them: in all cases, the E L 2 h-efficiency is better (lower values) and stable when our technique is used, while it varies heavily with standard linear elements (top). The running time increases slightly with our technique (bottom), due to the additional DOFs and the increase in the quadrature order.</p><p>Large Scale Validation. To validate the robustness and effectiveness of our method, we use it to solve a Poisson problem on a large collection of 19 618 3D models (Figure <ref type="figure">14</ref>). We created two datasets (optimized HQ and semi-optimized LQ) by running TetWild <ref type="bibr">[Hu et al. 2018]</ref> on the Thingi10k dataset <ref type="bibr">[Zhou and Jacobson 2016]</ref>, with two different stopping criteria (maximum conformal AMIPS 3D energy reaching 10 and 300), limited to a one-hour running time and 16GB of memory.</p><p>Ideally, a PDE solver should operate in a "black-box" manner, accepting input geometries and boundary conditions with few restrictions, and producing a solution with guaranteed error. We view the combination of the TetWild mesher and our FEM discretization as a step towards such a solver, and provide an initial evaluation of the complete pipeline. We report the timings and h-efficiencies in Figure <ref type="figure">16</ref>. The running time of our method is only slightly higher than P 1 , and the efficiency is stable for the entire dataset when our technique is used. This effect is particularly noticeable for the dataset with lower quality (right). 
Some examples where the difference is particularly noticeable are shown in Figure <ref type="figure">11</ref>. We also compare the performance of our method with full P k elements for k = 1, . . . , 4 on 300 randomly chosen high- and low-quality meshes (Figure <ref type="figure">17</ref>). As expected, for the user-specified degree k = 1, our method improves on the efficiency of P 1 while having running times similar to linear elements. Higher-order elements have a better h-efficiency, but their timings are significantly higher.</p><p>We show an accumulation plot of the combined timings of the meshing and analysis stages in Figure <ref type="figure">15</ref>: the pipeline that uses a lower threshold for the meshing quality is significantly faster, while obtaining a comparable h-efficiency (Figure <ref type="figure">16</ref>).</p><p>The difference in timings follows from our strategy: our criterion tends to choose higher degrees for low-quality meshes, while for high-quality meshes almost all elements are P 1 (Figure <ref type="figure">12</ref>). Our method increases the number of DOFs by a factor of 6 for the low-quality dataset (on average), versus a factor of 2 for the high-quality one (Figure <ref type="figure">13</ref>).</p><p>Singular Solutions. Increasing the mesh density (while keeping quality constant) steadily improves the accuracy of the solution. However, the same does not hold for increasing the degree of the basis: the solution might not be smooth around singularities, and is thus not well approximated by the smooth high-order basis; in fact, the accuracy may decrease due to the poor behavior of high-order polynomials in such cases. 
We experimentally show that our technique is not negatively affected by singularities on the boundary, by testing it on a representative problem from the NIST collection of 2D elliptic problems, which was introduced specifically to compare adaptive algorithms <ref type="bibr">[Mitchell 2013;</ref><ref type="bibr">Mitchell and McClain 2014]</ref> (Table <ref type="table">1</ref>).</p><p>TetWild <ref type="bibr">[Hu et al. 2018]</ref>, TetGen <ref type="bibr">[Si 2015]</ref>, CGAL <ref type="bibr">[Jamin et al. 2015]</ref> (figure legend).</p><p>Table <ref type="table">1</ref>. Singularities on the boundary have little impact on the accuracy of our method, as demonstrated in the case of the reentrant corner included in the collection of 2D elliptic problems <ref type="bibr">[Mitchell 2013]</ref>. The problem corresponds to Laplace's equation with Dirichlet boundary conditions on a domain with a reentrant corner of angle ω. The analytical solution used is f (x, y) = r^(π/ω) sin((π/ω) θ), where (r, θ) are the polar coordinates of (x, y).</p><p>Conditioning of the System. Increasing the degree of the basis usually has a negative effect on the condition number of the stiffness matrix <ref type="bibr">[Babuska et al. 1991]</ref>. While this is not particularly relevant for direct solvers, it might affect iterative linear solvers. To assess the effect of the conditioning on our technique, we compute the condition number of the stiffness matrix for the same mesh, with low and high quality, under uniform refinement <ref type="bibr">[Ong 1994]</ref> and compare it to standard linear elements (Figure <ref type="figure">18</ref>). We remark that while our method has a higher condition number (by a factor of 3.8 for the high-quality mesh and 5.4 for the low-quality one), the trend is the same. 
Moreover, we used the HYPRE iterative solver (with algebraic multigrid preconditioning) on our large-scale dataset (Figure <ref type="figure">14</ref>) and compared the number of iterations needed to reach a residual error of 1e-10 (Figure <ref type="figure">19</ref>). As expected, on the lower quality meshes our method requires more iterations than standard P 1 elements, because of the large percentage of higher-order elements (Figure <ref type="figure">12</ref>). Note that the significant reduction in the number of iterations for pure linear elements is due to the fact that our low-quality meshes have fewer than half the DOFs of the high-quality ones (Figure <ref type="figure">12</ref>, bottom). Although we noticed an increase in the number of iterations and timings for our method compared to P 1 (likely due to the degradation of the conditioning of the system for high-order elements and the corresponding increase in degrees of freedom), it still gives a considerable timing advantage when a full pipeline including mesh optimization is considered (Figure <ref type="figure">15</ref>). By comparing the detailed timings on 100 random meshes (Figure <ref type="figure">20</ref>), we see that most of the overhead of our method comes from the basis construction. This observation shows that our technique is a good fit for problems solved with algebraic multigrid techniques, and it would be interesting to conduct similar tests with geometric multigrid techniques, which are used in graphics for both unstructured <ref type="bibr">[Aksoylu et al. 2005;</ref><ref type="bibr">Shi et al. 2006;</ref><ref type="bibr">Botsch et al. 2006;</ref><ref type="bibr">Jakob et al. 2015</ref>] and structured meshes <ref type="bibr">[Zhu et al. 2010;</ref><ref type="bibr">McAdams et al. 2011</ref>].</p><p>Fig. <ref type="figure">13</ref>. Histograms of the ratio of the number of degrees of freedom of our method over linear P 1 elements for our 3D dataset with high quality (left) and low quality (right).</p><p>Linear Elasticity. In addition to the non-linear elasticity simulations shown in figures 7 and 8, we show a sampler of linear elasticity simulations with Dirichlet boundary conditions in Figure <ref type="figure">21</ref>. Note that the error drops to negligible levels with our technique, whereas a standard P 1 discretization would not be usable for these models.</p></div>
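The degradation of conditioning with degree can be reproduced in a tiny self-contained 1D experiment (our own illustration, unrelated to the paper's 3D benchmark): we assemble the 1D Laplace stiffness matrix for P 1 and P 2 elements on the same mesh and estimate its condition number with power/inverse iteration. The standard P 2 element stiffness matrix is assumed; everything else is generic numerical linear algebra.

```python
import math

def p1_stiffness(n):
    """1D Laplace stiffness on [0, 1] with n linear elements,
    Dirichlet DOFs at both ends eliminated (size n - 1)."""
    h, size = 1.0 / n, n - 1
    K = [[0.0] * size for _ in range(size)]
    for i in range(size):
        K[i][i] = 2.0 / h
    for i in range(size - 1):
        K[i][i + 1] = K[i + 1][i] = -1.0 / h
    return K

def p2_stiffness(n):
    """Same mesh with quadratic elements; the element matrix is the standard
    P2 stiffness (1/(3h)) [[7,1,-8],[1,7,-8],[-8,-8,16]], node order
    (left end, right end, midpoint)."""
    h, nn = 1.0 / n, 2 * n + 1
    K = [[0.0] * nn for _ in range(nn)]
    ke = [[7.0, 1.0, -8.0], [1.0, 7.0, -8.0], [-8.0, -8.0, 16.0]]
    for e in range(n):
        idx = [2 * e, 2 * e + 2, 2 * e + 1]
        for a in range(3):
            for b in range(3):
                K[idx[a]][idx[b]] += ke[a][b] / (3.0 * h)
    keep = range(1, nn - 1)  # eliminate Dirichlet ends
    return [[K[i][j] for j in keep] for i in keep]

def _solve(K, b):
    """Gaussian elimination without pivoting (adequate for SPD matrices)."""
    n = len(K)
    A = [row[:] + [bi] for row, bi in zip(K, b)]
    for c in range(n):
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n + 1):
                A[r][j] -= f * A[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][j] * x[j] for j in range(r + 1, n))
        x[r] = (A[r][n] - s) / A[r][r]
    return x

def cond(K, iters=300):
    """Condition number of an SPD matrix via power iteration (largest
    eigenvalue) and inverse iteration (smallest eigenvalue)."""
    n = len(K)

    def rayleigh(step):
        x = [1.0 + 0.01 * i for i in range(n)]  # generic start vector
        for _ in range(iters):
            y = step(x)
            nrm = math.sqrt(sum(v * v for v in y))
            x = [v / nrm for v in y]
        Kx = [sum(K[i][j] * x[j] for j in range(n)) for i in range(n)]
        return sum(x[i] * Kx[i] for i in range(n))

    matvec = lambda x: [sum(K[i][j] * x[j] for j in range(n)) for i in range(n)]
    return rayleigh(matvec) / rayleigh(lambda x: _solve(K, x))
```

On four elements the P 1 condition number is (1 + cos(π/4))/(1 − cos(π/4)) ≈ 5.83, while the P 2 matrix on the same mesh is noticeably worse, mirroring the trend reported for Figure 18.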
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">CONCLUDING REMARKS</head><p>We introduced a simple and effective technique to decouple the simulation error from mesh quality using an a priori analysis of the mesh geometry. The technique has two downsides: (1) it increases the computational cost, and (2) it requires a somewhat more complex basis construction to set up the stiffness matrix and right-hand side. We argue that the minor increase in runtime is well worth the benefit of not having to consider or optimize mesh quality, especially when only a few solves need to be done for a specific mesh. To ameliorate the second downside, we provide a self-contained, open-source C++ reference implementation of our technique, which will allow practitioners and researchers to easily integrate our basis into existing FEM systems (<ref type="url">https://github.com/polyfem/polyfem</ref>). An interesting avenue for future work is the integration of our technique with existing (and complementary) hp-refinement methods, to concentrate DOFs based on a posteriori error estimates and on the right-hand side. Similarly, it would be interesting to explore the use of our technique in time-dependent simulations where the mesh is deformed at each iteration, dynamically changing the elements' quality, and whether it can be helpful to increase the time step. Although most graphics (and engineering) applications use elliptic PDEs, we plan to extend our estimate to a larger class of non-elliptic PDEs, such as wave equations.</p><p>We believe that the benefits of our technique heavily outweigh the downsides: we expect our contribution to have a large impact not only on the design of black-box simulation pipelines, but also on improving the quality and reliability of every graphics algorithm relying on solving a discrete PDE.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A WEAK DERIVATIVES AND SOBOLEV NORMS</head><p>The Sobolev space H m is the space of functions whose weak derivatives up to order m are square-integrable over the domain, with variables x 1 , . . . , x d and d = 2 in 2D, d = 3 in 3D. For instance, H 0 is the space of square-integrable functions, with the usual L 2 -norm ‖ f ‖ L 2 = ‖ f ‖ 0 .</p></div>
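For reference, the standard definitions used throughout the appendices can be written out explicitly (a textbook reconstruction, not text recovered from the original):

```latex
\|f\|_m^2 = \sum_{|\alpha| \le m} \|\partial^\alpha f\|_0^2,
\qquad
|f|_m^2 = \sum_{|\alpha| = m} \|\partial^\alpha f\|_0^2,
\qquad
\partial^\alpha f = \partial_{x_1}^{\alpha_1} \cdots \partial_{x_d}^{\alpha_d} f,
```

with multi-index α = (α 1 , . . . , α d ) and d = 2 in 2D, d = 3 in 3D; in particular, ‖ f ‖ L 2 = ‖ f ‖ 0 .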
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B P k BASIS</head><p>We use Lagrange polynomials to interpolate between the nodes in triangular and tetrahedral elements. The coefficients of those polynomials can be pre-computed for a fixed degree by solving a linear system formed by the Vandermonde matrix of the polynomial basis evaluated at the nodes of the reference element. For convenience, we provide the explicit formulation for P 1 and P 2 , both in 2D and 3D, while higher-order bases are provided in the source code.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head>C DERIVATION OF THE ERROR ESTIMATE</head><p>To make the origin of the shape parameter dependence in Γ clear, we include a short derivation of the estimate in the special case of the Poisson equation with homogeneous Dirichlet boundary conditions, u| ∂Ω = 0. We emphasize that this derivation is not PDE-specific: effectively the same steps, in a generalized and hence longer form, are followed in the general case.</p><p>An important assumption required for this derivation is H 2 -regularity of the domain, that is, f ∈ L 2 , u ∈ H 2 ∩ H 1 0 , and ‖u ‖ 2 ≤ C 1 (Ω)‖ f ‖ 0 .</p><p>We denote by ⟨u, v⟩| 1 the inner product of gradients over the domain and by ⟨ f , v⟩| 0 the L 2 inner product. With this notation, the PDE and its finite element discretization can be written as: find u such that ⟨u, v⟩| 1 = ⟨ f , v⟩| 0 for any v ∈ H 1 , and find u h such that ⟨u h , v h ⟩| 1 = ⟨ f , v h ⟩| 0 for any v h ∈ V h . Let e = u − u h be the approximation error; by taking v = v h and subtracting the two equations, we obtain ⟨e, v h ⟩| 1 = 0 for any v h ∈ V h , which is the error orthogonality property.</p><p>(1) Estimate for |e | 1 . For an arbitrary function v h ∈ V h ,</p><p>The last term vanishes, by error orthogonality, as v h − u h ∈ V h . By the Cauchy-Schwarz inequality,</p><p>for the particular choice v h = I h u ∈ V h .</p><p>(2) Apply the interpolation gradient error. 
On an element E,</p><p>|u − I h u | 1,E ≤ C (h E ^2 /ρ E ) ‖u ‖ 2,E = C (h E /σ E ) ‖u ‖ 2,E , where h E is the maximal edge length and ρ E is the inscribed circle or sphere radius <ref type="bibr">[Ciarlet and Raviart 1972]</ref>. Therefore, the total interpolation error is bounded by C ′ max E (h E /σ E ) ‖u ‖ 2 . This yields the H 1 error estimate</p><p>(3) L 2 estimate via an auxiliary Poisson problem. Let us consider the auxiliary system ⟨w, v⟩| 1 = ⟨e, v⟩| 0 for any v ∈ H 1 , that is, the Poisson equation with the right-hand side equal to the error.</p><p>Applying the definition of w with v = e, followed by error orthogonality, we get ‖e ‖ 2 0 = ⟨e, e⟩| 0 = ⟨w, e⟩| 1 = ⟨w − I h w, e⟩| 1 ≤ |w − I h w | 1 |e | 1 . Using the interpolation error a second time, we get</p><p>Finally, by H 2 -regularity, ‖w ‖ 2 &lt; C 1 (Ω)‖e ‖ 0 , as e is the right-hand side. Substituting this estimate and the estimate for |e | 1 into (7), and cancelling the factor ‖e ‖ 0 , we obtain</p><p>which is the expected estimate.</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0"><p>ACM Trans. Graph., Vol. 37, No. 6, Article 280. Publication date: November 2018.</p></note>
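The pre-computation of the Lagrange coefficients described in Appendix B can be sketched as follows, using exact rational arithmetic on the 2D reference triangle (our own reconstruction; the node ordering and helper names are assumptions, not the paper's code):

```python
from fractions import Fraction

def nodes(k):
    """Equispaced nodes of the degree-k reference triangle, in steps of 1/k."""
    return [(Fraction(i, k), Fraction(j, k))
            for i in range(k + 1) for j in range(k + 1 - i)]

def exponents(k):
    """Monomial exponents (a, b) for x^a y^b with a + b at most k."""
    return [(a, b) for a in range(k + 1) for b in range(k + 1 - a)]

def invert(M):
    """Gauss-Jordan inverse over exact rationals."""
    n = len(M)
    A = [row[:] + [Fraction(int(i == r)) for i in range(n)]
         for r, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return [row[n:] for row in A]

def lagrange_coeffs(k):
    """Columns of V^(-1) hold the monomial coefficients of each basis
    function, where V[i][j] is monomial j evaluated at node i."""
    pts, exps = nodes(k), exponents(k)
    V = [[x**a * y**b for (a, b) in exps] for (x, y) in pts]
    return invert(V)

def eval_basis(C, k, i, x, y):
    """Evaluate the i-th Lagrange basis function at (x, y)."""
    return sum(C[j][i] * x**a * y**b
               for j, (a, b) in enumerate(exponents(k)))
```

Because the nodes of the principal lattice are unisolvent for P k , the Vandermonde matrix is invertible, and the resulting basis satisfies the Lagrange (delta) property and partition of unity exactly.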
		</body>
		</text>
</TEI>
