

Title: A parallel variational mesh quality improvement method for tetrahedral meshes
There are numerous large-scale applications requiring mesh adaptivity, e.g., computational fluid dynamics and weather prediction, and simulations involving large-scale adaptive meshes require parallel processing. In this paper, we propose a parallel variational mesh quality improvement algorithm for distributed-memory machines. Our method parallelizes the serial variational mesh quality improvement method of Huang and Kamenski, which uses the Moving Mesh PDE (MMPDE) method to adapt the mesh by minimizing an energy functional for mesh equidistribution and alignment. The minimization yields a system of ordinary differential equations (ODEs) whose solution determines where to move the interior mesh nodes. An efficient solution is obtained by solving the ODEs on subregions of the mesh with overlapped communication and computation. Strong and weak scaling experiments on up to 128 cores for meshes with up to 160M elements demonstrate excellent scalability.
Award ID(s): 1717894, 1500487, 1808553
NSF-PAR ID: 10174100
Journal Name: Proceedings of the 28th International Meshing Roundtable
Page Range / eLocation ID: 37-49
Sponsoring Org: National Science Foundation
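The abstract's central computational idea, integrating the node-movement ODEs subregion by subregion while overlapping communication with computation, follows a familiar distributed-memory pattern. The sketch below illustrates that pattern only; the time integrator (forward Euler), the helper names, and the data layout are assumptions for illustration, not the authors' implementation.

```python
# Sketch: one explicit step of mesh-movement ODEs dx/dt = v(x) on a
# partitioned mesh, overlapping ghost exchange with interior updates.
# mpi4py-based illustration; not the paper's actual code.
import numpy as np
from mpi4py import MPI

def overlapped_step(comm, x, v, interior, boundary, neighbors,
                    send_idx, ghost_idx, recv_buf, dt):
    """Advance node coordinates x (n_nodes x 3) by one forward Euler step.

    interior:  node indices whose update needs no ghost data
    boundary:  node indices adjacent to the partition boundary
    send_idx/ghost_idx/recv_buf: per-neighbor index arrays and buffers
    """
    reqs, keep = [], []
    # Post non-blocking exchanges of partition-boundary coordinates.
    for r in neighbors:
        buf = np.ascontiguousarray(x[send_idx[r]])
        keep.append(buf)                      # keep send buffers alive
        reqs.append(comm.Isend(buf, dest=r))
        reqs.append(comm.Irecv(recv_buf[r], source=r))
    # Overlap: update interior nodes while messages are in flight.
    x[interior] += dt * v(x, interior)
    # Complete the exchange, copy ghosts into place, then update the rest.
    MPI.Request.Waitall(reqs)
    for r in neighbors:
        x[ghost_idx[r]] = recv_buf[r]
    x[boundary] += dt * v(x, boundary)
    return x
```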
More Like this
  1. Summary

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction-diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and the solution is then projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data between the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. We demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10⁹ unknowns.
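    The sampling idea above, realizing a Gaussian random field by solving a reaction-diffusion PDE with a white-noise right-hand side, can be sketched in a few lines. The following is a deliberately simplified 2-D finite-difference version (the paper uses a mixed finite element discretization on an embedded 3-D structured mesh); the parameter `kappa`, the noise scaling, and the boundary treatment are illustrative assumptions, not the paper's discretization.

```python
# Sketch: sample a Gaussian random field on a structured grid by solving
# (kappa^2 - Laplacian) u = W with white-noise right-hand side W.
# Finite-difference simplification; kappa controls the correlation length.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def sample_grf(n, kappa=10.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    h = 1.0 / (n - 1)
    # 1D second-difference operator (Dirichlet-like truncation at the ends)
    lap1d = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
    eye = sp.identity(n)
    lap2d = sp.kron(lap1d, eye) + sp.kron(eye, lap1d)   # 2D Laplacian
    A = (kappa**2) * sp.identity(n * n) - lap2d
    # discrete white noise: variance ~ 1/h^2 per grid point in 2D
    w = rng.standard_normal(n * n) / h
    return spla.spsolve(A.tocsc(), w).reshape(n, n)

field = sample_grf(64)   # one realization on a 64x64 grid
```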

     
  2. Osteoarthritis of the knee is increasingly prevalent as our population ages, representing an increasing financial burden and severely impacting quality of life. The invasiveness of in vivo procedures and the high cost of cadaveric studies have left computational tools uniquely suited to study knee biomechanics. Developments in deep learning have great potential for efficiently generating large-scale datasets to enable researchers to perform population-sized investigations, but the time and effort associated with producing robust hexahedral meshes have been a limiting factor in expanding finite element studies to encompass a population. Here we developed a fully automated pipeline capable of taking magnetic resonance knee images and producing a working finite element simulation. We trained an encoder-decoder convolutional neural network to perform semantic image segmentation on the Imorphics dataset provided through the Osteoarthritis Initiative. The Imorphics dataset contained 176 image sequences with varying levels of cartilage degradation. Starting from an open-source swept-extrusion meshing algorithm, we further developed this algorithm until it could produce high-quality meshes for every sequence, and we applied a template-mapping procedure to automatically place soft-tissue attachment points. The meshing algorithm produced simulation-ready meshes for all 176 sequences, regardless of the use of provided (manually reconstructed) or predicted (automatically generated) segmentation labels. The average time to mesh all bones and cartilage tissues was less than 2 min per knee on an AMD Ryzen 5600X processor, using a parallel pool of three workers for bone meshing, followed by a pool of four workers meshing the four cartilage tissues. Of the 176 sequences with provided segmentation labels, 86% of the resulting meshes completed a simulated flexion-extension activity. We used a reserved testing dataset of 28 sequences unseen during network training to produce simulations derived from predicted labels. We compared tibiofemoral contact mechanics between manual and automated reconstructions for the 24 pairs of successful finite element simulations from this set, resulting in mean root-mean-squared differences under 20% of their respective min-max norms. In combination with further advancements in deep learning, this framework represents a feasible pipeline to produce population-sized finite element studies of the natural knee from subject-specific models.
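    The pipeline's stages map naturally onto a small driver script. The sketch below mirrors the worker pools described above (three for bones, four for cartilage); every helper (`segment_mri`, `mesh_tissue`, `map_attachments`, `run_flexion_extension`) is a hypothetical placeholder, not the authors' API.

```python
# Hypothetical driver mirroring the described stages: segmentation ->
# parallel bone meshing -> parallel cartilage meshing -> attachment
# mapping -> FE simulation. All helper functions are placeholders.
from concurrent.futures import ProcessPoolExecutor
from functools import partial

BONES = ["femur", "tibia", "patella"]
CARTILAGE = ["femoral", "medial_tibial", "lateral_tibial", "patellar"]

def build_knee_model(mri_sequence):
    labels = segment_mri(mri_sequence)        # encoder-decoder CNN labels
    mesh = partial(mesh_tissue, labels)       # swept-extrusion hex meshing
    with ProcessPoolExecutor(max_workers=3) as pool:
        bones = dict(zip(BONES, pool.map(mesh, BONES)))
    with ProcessPoolExecutor(max_workers=4) as pool:
        cartilage = dict(zip(CARTILAGE, pool.map(mesh, CARTILAGE)))
    model = map_attachments(bones, cartilage)  # template-mapped attachments
    return run_flexion_extension(model)        # FE simulation
```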
  3. Summary

    Large-scale modelling of 3-D controlled-source electromagnetic (CSEM) surveys used to be feasible only for large companies and research consortia. This has changed over the last few years, and today a selection of open-source codes is available to everyone. Using four different codes in the Python ecosystem, we perform simulations for increasingly complex models in a shallow marine setting. We first verify the computed fields with semi-analytical solutions for a simple layered model. Then we validate the responses of a more complex block model by comparing results obtained from each code. Finally, we compare the responses of a real-world model with results from the industry. On the one hand, these validations show that the open-source codes are able to compute comparable CSEM responses for challenging, large-scale models. On the other hand, they expose many general and method-dependent problems that must be addressed to obtain accurate results. Our comparison includes finite-element and finite-volume codes using structured rectilinear and octree meshes as well as unstructured tetrahedral meshes. Accurate responses can be obtained independently of the chosen method and the chosen mesh type. The runtime and memory requirements vary greatly based on the choice of iterative or direct solvers. However, we found that much more time was spent on designing the mesh and setting up the simulations than on running the actual computation. The challenging task, irrespective of the chosen code, is to appropriately discretize the model. We provide three models, each with its corresponding discretization and the responses of the four codes, which can be used for validation of new and existing codes. The collaboration of four code maintainers trying to achieve the same task in the end brought all four codes a significant step further. This includes improved meshing and interpolation capabilities, resulting in shorter runtimes for the same accuracy. We hope that these results may be useful for the CSEM community at large and that we can build over time a suite of benchmarks that will help to increase confidence in existing and new 3-D CSEM codes.
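    A common way to quantify agreement between two codes' responses in CSEM validation studies is a normalized root-mean-square difference; the sketch below shows one such pointwise metric as an assumption, not necessarily the exact measure used by the authors.

```python
# Pointwise normalized RMS difference (in %) between two complex-valued
# EM response vectors; values near 0 indicate close agreement.
import numpy as np

def nrmsd(resp_a, resp_b):
    a, b = np.asarray(resp_a), np.asarray(resp_b)
    return 200.0 * np.abs(a - b) / (np.abs(a) + np.abs(b))
```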
  4. Summary

    This paper presents a variational multiscale (VMS) based finite element method in which the stabilization parameter is computed dynamically. The dynamic procedure takes a general form of the stabilization parameter with unknown coefficients and computes those coefficients locally, resulting in a dynamic VMS-based finite element method; a static stabilization parameter with pre-defined coefficients is not needed. A variational Germano identity (VGI) based local procedure suitable for unstructured meshes is developed to perform the dynamic computation. The local VGI-based procedure is applied at each interior vertex in the mesh: the unknown coefficients are first determined locally at each vertex, and each element subsequently takes the maximum value over its vertices. To make the procedure practical, a coarser secondary solution is constructed from the primary coarse-scale solution locally over a patch of elements around each interior vertex. Further, averaging steps are employed to make the local dynamic procedure robust. The new dynamic VMS formulation is applied to steady problems governed by the advection-diffusion and incompressible Navier-Stokes equations in both 1D and 2D to demonstrate its efficacy and effectiveness.
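    The vertex-to-element reduction described above is simple to state in code. In the sketch below, `vertex_coefficient` stands in for the local variational Germano identity solve, which is abstracted away as a placeholder; only the max-over-vertices step is shown.

```python
# Sketch: per-element stabilization coefficient as the maximum of the
# vertex-local coefficients over each element's vertices.
import numpy as np

def element_stabilization(elements, n_vertices, vertex_coefficient):
    """elements: (n_elem, nodes_per_elem) integer connectivity array."""
    c_vertex = np.array([vertex_coefficient(v) for v in range(n_vertices)])
    # fancy indexing gives (n_elem, nodes_per_elem); reduce over vertices
    return c_vertex[elements].max(axis=1)
```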

     
  5. Summary

    The quasicontinuum (QC) method is a concurrent scale-bridging technique that extends atomistic accuracy to significantly larger length scales by reducing the full atomic ensemble to a small set of representative atoms and using interpolation to recover the motion of all lattice sites where full atomistic resolution is not necessary. While traditional QC methods thereby create interfaces between fully resolved and coarse-grained regions, the recently introduced fully nonlocal QC framework does not fundamentally differentiate between atomistic and coarsened domains. Adding adaptive refinement enables us to tie atomistic resolution to evolving regions of interest such as moving defects. However, model adaptivity is challenging because large particle motion is described based on a reference mesh (even in the atomistic regions). Unlike in the context of, for example, finite element meshes, adaptivity here requires that (i) all vertices lie on a discrete point set (the atomic lattice), (ii) model refinement is performed locally and provides sufficient mesh quality, and (iii) Verlet neighborhood updates in the atomistic domain are performed against a Lagrangian mesh. With the suite of adaptivity tools outlined here, the nonlocal QC method is shown to bridge across scales from atomistics to the continuum in a truly seamless fashion, as illustrated for nanoindentation and void growth.
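    Requirement (i), that refinement vertices land exactly on lattice sites, can be illustrated with a toy snapping routine. The basis matrix, the rounding rule (nearest in lattice coordinates, which is not strictly nearest in Euclidean distance for skewed bases), and the fcc example are illustrative assumptions, not the QC code's algorithm.

```python
# Toy sketch: constrain a candidate refinement vertex to the discrete
# point set of a Bravais lattice, x = B @ n with integer coordinates n.
import numpy as np

def snap_to_lattice(x, basis):
    """Round a candidate vertex position to a nearby lattice site."""
    n = np.rint(np.linalg.solve(basis, x))   # nearest integer lattice coords
    return basis @ n

fcc = 0.5 * np.array([[0., 1., 1.],
                      [1., 0., 1.],
                      [1., 1., 0.]])          # fcc primitive basis (a = 1)
print(snap_to_lattice(np.array([0.6, 0.4, 1.1]), fcc))
```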

     