

Title: Optimized wavelet‐based adaptive mesh refinement algorithm for numerical modeling of three‐dimensional global‐scale atmospheric chemical transport
Abstract

Substantial numerical difficulties associated with the computational modeling of multiscale global atmospheric chemical transport impose severe limitations on the spatial resolution of nonadaptive fixed grids. The coarse spatial discretization introduces a large amount of numerical diffusion into the system, which, in combination with strong flow stretching, causes large numerical errors. To resolve this issue, we have developed an optimized wavelet-based adaptive mesh refinement (OWAMR) method. OWAMR is a three-dimensional adaptive method that dynamically introduces a fine grid only in regions where small spatial structures occur. The algorithm combines high-order upwind schemes with a new two-parameter adaptation criterion that significantly (by factors between 1.5 and 2.7) reduces the number of grid points compared with the more conventional one-parameter grid adaptation used by wavelet-based adaptive techniques; together these substantially increase the accuracy with which the advection operator is approximated. The method has been shown to simulate the dynamics of a pollution plume that travels on a global scale with less than 3% error. To achieve such accuracy, conventional three-dimensional nonadaptive techniques would require five orders of magnitude more computational resources. The method therefore provides a realistic opportunity to model accurately a variety of the most demanding multiscale problems in atmospheric chemical transport that are difficult or impossible to simulate on existing computational facilities with conventional fixed-grid techniques.
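To make the two-parameter criterion concrete, here is a minimal one-dimensional sketch in Python. The interpolating-wavelet detail coefficients and the threshold names `eps_refine` and `eps_coarsen` are illustrative assumptions, not the paper's exact OWAMR criterion.

```python
import numpy as np

def detail_coefficients(f_coarse, f_fine):
    """Interpolating-wavelet details: difference between fine-grid values
    at the new (odd) points and a linear prediction from the coarse grid."""
    predicted = 0.5 * (f_coarse[:-1] + f_coarse[1:])  # midpoint prediction
    return np.abs(f_fine[1::2] - predicted)

def adapt_flags(details, eps_refine, eps_coarsen):
    """Two-parameter criterion (illustrative): refine where the detail
    exceeds eps_refine, coarsen where it falls below eps_coarsen, and
    leave cells unchanged in the band in between."""
    assert eps_coarsen < eps_refine
    return details > eps_refine, details < eps_coarsen

# Example: a steep front, the kind of small-scale structure that triggers refinement
x_fine = np.linspace(0.0, 1.0, 129)   # level j+1
x_coarse = x_fine[::2]                # level j
f = lambda x: np.tanh(50.0 * (x - 0.5))
d = detail_coefficients(f(x_coarse), f(x_fine))
refine, coarsen = adapt_flags(d, eps_refine=1e-2, eps_coarsen=1e-4)
print(f"refine {refine.sum()} cells, coarsen {coarsen.sum()} cells")
```

The band between the two thresholds is what a one-parameter criterion lacks: cells whose details fall between `eps_coarsen` and `eps_refine` keep their current resolution, avoiding needless refinement near a single cutoff, consistent with the reduction in grid points reported above.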

 
Award ID(s): 1832089
NSF-PAR ID: 10456547
Author(s) / Creator(s): ;
Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name: Quarterly Journal of the Royal Meteorological Society
Volume: 146
Issue: 729
ISSN: 0035-9009
Page Range / eLocation ID: p. 1564-1574
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. SUMMARY

    Combining finite element methods for the incompressible Stokes equations with particle-in-cell methods is an important technique in computational geodynamics that has been widely applied in mantle convection, lithosphere dynamics and crustal-scale modelling. In these applications, particles are used to transport properties of the medium, such as temperature, chemical composition or other material properties, along with the flow; the particle methods thereby reduce the advection equation to an ordinary differential equation for each particle, resulting in a problem that is simpler to solve than the original field equation, for which stabilization techniques are necessary to avoid oscillations.
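    To illustrate the reduction just described, the Python sketch below advects one particle by integrating dx/dt = u(x, t) with a classical fourth-order Runge-Kutta step; the velocity field `u` is a hypothetical divergence-free flow, not one of the paper's benchmarks, and any property the particle carries simply rides along.

```python
import numpy as np

def rk4_step(x, t, dt, velocity):
    """One classical Runge-Kutta step of dx/dt = u(x, t) for one particle."""
    k1 = velocity(x, t)
    k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(x + dt * k3, t + dt)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def u(x, t):
    """Illustrative divergence-free 2-D velocity field (not from the paper)."""
    return np.array([-np.sin(np.pi * x[0]) * np.cos(np.pi * x[1]),
                      np.cos(np.pi * x[0]) * np.sin(np.pi * x[1])])

x, dt = np.array([0.3, 0.4]), 0.01
for n in range(100):                  # advect the particle for 100 steps
    x = rk4_step(x, n * dt, dt, u)
print(x)
```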

    On the other hand, replacing field-based descriptions by quantities defined only at the locations of particles introduces numerical errors. These errors have previously been investigated, but a complete understanding from both the theoretical and practical sides has so far been lacking. In addition, we are not aware of systematic guidance on how many particles one needs per mesh cell to achieve a given accuracy.

    In this paper we modify two existing instantaneous benchmarks and present two new analytic benchmarks for time-dependent incompressible Stokes flow in order to compare the convergence rate and accuracy of various combinations of finite elements, particle advection and particle interpolation methods. Using these benchmarks, we find that in order to retain the optimal accuracy of the finite element formulation, one needs to use a sufficiently accurate particle interpolation algorithm. Additionally, we observe and explain that for our higher-order finite-element methods it is necessary to increase the number of particles per cell as the mesh resolution increases (i.e. as the grid cell size decreases) to avoid a reduction in convergence order.

    Our methods and results allow the design of new particle-in-cell methods with specific convergence rates, and provide guidance for the choice of common building blocks and parameters such as the number of particles per cell. In addition, our new time-dependent benchmark provides a simple test that can be used to compare different implementations and algorithms, and to assess new numerical methods for particle interpolation and advection. We provide a reference implementation of this benchmark in aspect (the 'Advanced Solver for Problems in Earth's ConvecTion'), an open-source code for geodynamic modelling.

     
  2. Abstract

    Particle accelerators are invaluable discovery engines in the chemical, biological and physical sciences. Characterizing the accelerated beam's response to accelerator input parameters is often the first step when conducting accelerator-based experiments. Currently used characterization techniques, such as grid-like parameter sampling scans, become impractical when extended to higher-dimensional input spaces, when complicated measurement constraints are present, or when prior information about the beam response is scarce. Here we describe an adaptation of the popular Bayesian optimization algorithm that enables turn-key exploration of input parameter spaces. Our algorithm removes the need for parameter scans while minimizing the prior information required about the measurement's behavior and the associated measurement constraints. We experimentally demonstrate that our algorithm autonomously conducts an adaptive, multi-parameter exploration of input parameter space, potentially orders of magnitude faster than conventional grid-like parameter scans, while making highly constrained, single-shot beam phase-space measurements and accounting for the costs associated with changing input parameters. Beyond accelerator-based scientific experiments, this algorithm addresses challenges shared by many scientific disciplines and is thus applicable to autonomously conducting experiments over a broad range of research topics.
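    The sketch below shows the general shape of such a Bayesian optimization loop: a small numpy Gaussian-process surrogate with an upper-confidence-bound acquisition over a hypothetical one-dimensional "beam response". It omits the measurement constraints and parameter-change costs that the authors' algorithm additionally handles, so it illustrates the generic technique rather than their method.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(x_tr, y_tr, x_te, noise=1e-4):
    """Gaussian-process posterior mean and standard deviation on a grid."""
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_te)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)  # k(x, x) = 1
    return Ks.T @ alpha, np.sqrt(var)

objective = lambda x: -(x - 0.6) ** 2 + 0.05 * np.sin(20 * x)  # hypothetical response

grid = np.linspace(0.0, 1.0, 200)
x_obs = np.array([0.1, 0.9])                     # two initial measurements
y_obs = objective(x_obs)
for _ in range(15):                              # adaptive exploration loop
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(mu + 2.0 * sigma)]   # UCB acquisition
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))
print("best input found:", x_obs[np.argmax(y_obs)])
```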

     
  3. Surface albedo is of crucial interest in land-climate interaction studies, since it is a key parameter that affects the Earth's radiation budget. The temporal and spatial variation of surface albedo can be retrieved from conventional satellite observations after a series of processes, including atmospheric correction to surface spectral bi-directional reflectance factor (BRF), bi-directional reflectance distribution function (BRDF) modelling using these BRFs and, where required, narrow-to-broadband albedo conversion. This processing chain introduces errors that can accumulate and affect the accuracy of the retrieved albedo products. In this study, the albedo products derived from the multi-angle imaging spectroradiometer (MISR), the moderate resolution imaging spectroradiometer (MODIS) and the Copernicus Global Land Service (CGLS), based on the VEGETATION and now the PROBA-V sensors, are compared with albedometer and upscaled in situ measurements from 19 tower sites of the FLUXNET, surface radiation budget (SURFRAD) and Baseline Surface Radiation Network (BSRN) networks. The MISR sensor onboard the Terra satellite has nine cameras at different view angles, which allows a near-simultaneous retrieval of surface albedo. Using a 16-day retrieval algorithm, MODIS generates daily albedo products (MCD43A) at 500-m resolution. The CGLS albedo products are derived from VEGETATION and PROBA-V and updated every 10 days using a weighted 30-day window. We describe a newly developed method to derive the two types of albedo, directional hemispherical reflectance (DHR) and bi-hemispherical reflectance (BHR), directly from three tower-measured variables of shortwave radiation: downwelling, upwelling and diffuse shortwave radiation. In the validation process, the MISR-, MODIS- and CGLS-derived albedos (DHR and BHR) are first compared with tower-measured albedos using pixel-to-point analysis from 2012 to 2016. The tower-measured point albedos are then upscaled to coarse-resolution albedos based on atmospherically corrected BRFs from high-resolution Earth observation (HR-EO) data, alongside MODIS BRDF climatology from a larger area. A pixel-to-pixel comparison is then performed between DHR and BHR retrieved from coarse-resolution satellite observations and DHR and BHR upscaled from accurate tower measurements. Experimental results explore the parameter space associated with land cover type, heterogeneous vs. homogeneous pixels, and instantaneous vs. time-composite retrievals of surface albedo.
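    One common way to recover the two albedos from the three tower-measured fluxes is a least-squares fit of the blue-sky decomposition albedo = (1 - f) * DHR + f * BHR, where f is the diffuse fraction. The sketch below implements that generic model; it is an assumption for illustration and not necessarily the paper's newly developed method.

```python
import numpy as np

def dhr_bhr_from_tower(sw_down, sw_up, sw_diffuse):
    """Least-squares DHR/BHR from tower shortwave fluxes, assuming the
    blue-sky decomposition sw_up/sw_down = (1 - f)*DHR + f*BHR with
    diffuse fraction f = sw_diffuse/sw_down (illustrative model)."""
    alpha_blue = sw_up / sw_down            # measured blue-sky albedo
    f = sw_diffuse / sw_down                # diffuse fraction per observation
    A = np.column_stack([1.0 - f, f])       # [direct, diffuse] design matrix
    (dhr, bhr), *_ = np.linalg.lstsq(A, alpha_blue, rcond=None)
    return dhr, bhr

# Synthetic check: recover a true DHR of 0.18 and BHR of 0.22
rng = np.random.default_rng(0)
f = rng.uniform(0.1, 0.9, 50)
sw_down = rng.uniform(400.0, 900.0, 50)
sw_up = sw_down * ((1.0 - f) * 0.18 + f * 0.22)
print(dhr_bhr_from_tower(sw_down, sw_up, sw_down * f))
```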
  4. Abstract

    For data assimilation to provide faithful state estimates for dynamical models, specifications of observation uncertainty need to be as accurate as possible. Innovation-based methods, such as the Desroziers diagnostics, are commonly used to estimate observation uncertainty, but such methods can depend greatly on the prescribed background uncertainty. For ensemble data assimilation, this uncertainty comes from statistics calculated from ensemble forecasts, which require inflation and localization to address undersampling. In this work, we use an ensemble Kalman filter (EnKF) with a low-dimensional Lorenz model to investigate the interplay between the Desroziers method and inflation. Two inflation techniques are used for this purpose: 1) a rigorously tuned fixed multiplicative scheme and 2) an adaptive state-space scheme. We document how inaccuracies in observation uncertainty affect errors in EnKF posteriors and study the combined impacts of misspecified initial observation uncertainty, sampling error and model error on Desroziers estimates. We find that whether observation uncertainty is over- or underestimated greatly affects the stability of data assimilation and the accuracy of Desroziers estimates, and that preference should be given to initial overestimates. Inline Desroziers estimates tend to remove the dependence of the ensemble spread-skill relationship on the initially prescribed observation error. In addition, we find that the inclusion of model error introduces spurious correlations in observation uncertainty estimates. Further, we note that the adaptive inflation scheme is less robust than fixed inflation at mitigating multiple sources of error. Last, sampling error strongly exacerbates existing sources of error and greatly degrades EnKF estimates, which translates into biased Desroziers estimates of observation-error covariance.
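    For reference, the consistency relation behind the Desroziers diagnostics can be written as below (standard notation, not copied from the paper). The estimate is unbiased only when the prescribed background and observation covariances are mutually consistent, which is why the inflation choices studied here matter.

```latex
% Innovations with respect to the background x_b and the analysis x_a,
% for observations y and observation operator H:
\[
  \mathbf{d}_b = \mathbf{y} - H(\mathbf{x}_b), \qquad
  \mathbf{d}_a = \mathbf{y} - H(\mathbf{x}_a),
\]
% Desroziers et al. (2005) consistency estimate of the observation-error
% covariance R (expectation taken over many assimilation cycles):
\[
  \tilde{\mathbf{R}} \;=\; \mathbb{E}\!\left[\mathbf{d}_a\,\mathbf{d}_b^{\mathsf{T}}\right].
\]
```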

    Significance Statement

    To generate accurate predictions of various components of the Earth system, numerical models require an accurate specification of state variables at the current time. This step adopts a probabilistic weighing of our current state estimate against information provided by environmental measurements of the true state. Various strategies exist for estimating uncertainty in observations within this framework, but they are sensitive to a host of assumptions, which we investigate in this study.

     
  5. In some applications it is reasonable to assume that geodesics (rays) have a consistent orientation, so that a time-harmonic elastic wave equation may be viewed as an evolution equation in one of the spatial directions. With such applications in mind, and motivated by our recent work [Hadamard-Babich ansatz for point-source elastic wave equations in variable media at high frequencies, Multiscale Model. Simul. 19/1 (2021) 46-86], we propose a new globally valid asymptotic method based on the truncated Hadamard-Babich ansatz, dubbed the fast Huygens sweeping method, for computing Green's functions of frequency-domain point-source elastic wave equations in inhomogeneous media in the high-frequency asymptotic regime and in the presence of caustics. The first novelty of the fast Huygens sweeping method is that the Huygens-Kirchhoff secondary-source principle is used to integrate many locally valid asymptotic solutions into a globally valid asymptotic solution, so that caustics are treated automatically. This yields solutions that are uniformly accurate both near the source and away from it. The second novelty is that a butterfly algorithm is adapted to accelerate the matrix-vector products induced by the Huygens-Kirchhoff integral. The new method enjoys the following desired features: (1) it treats caustics automatically; (2) precomputed asymptotic ingredients can be reused to construct Green's functions of elastic wave equations for many different point sources and arbitrary frequencies; (3) for a specified number of points per wavelength, it constructs Green's functions in nearly optimal complexity O(N log N) in terms of the total number of mesh points N, where the prefactor of the complexity depends only on the specified accuracy and is independent of the frequency parameter. Three-dimensional numerical examples demonstrate the performance and accuracy of the new method.
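    The sketch below illustrates the secondary-source principle in its simplest scalar form: the field is carried from a point source to receivers by summing free-space Helmholtz Green's functions over secondary sources on an intermediate plane. This is a deliberate simplification; the paper's method uses elastic Hadamard-Babich ingredients as the local kernels, retains the normal-derivative terms of the true Kirchhoff integral, and accelerates the summation with a butterfly algorithm, all omitted here.

```python
import numpy as np

def green(x, y, k):
    """Free-space scalar Helmholtz Green's function exp(ikr) / (4 pi r)."""
    r = np.linalg.norm(x - y, axis=-1)
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

def huygens_kirchhoff(source, receivers, plane_pts, weights, k):
    """Source -> secondary sources on a plane -> receivers, by direct
    double summation (the paper accelerates this matrix-vector product
    with a butterfly algorithm)."""
    u_plane = green(plane_pts, source, k) * weights          # field at plane
    G2 = green(receivers[:, None, :], plane_pts[None, :, :], k)
    return G2 @ u_plane                                      # O(M*N) direct sum

k = 40.0
source = np.array([0.0, 0.0, 0.0])
ys, zs = np.meshgrid(np.linspace(-1, 1, 40), np.linspace(-1, 1, 40))
plane = np.column_stack([np.full(ys.size, 0.5), ys.ravel(), zs.ravel()])
weights = np.full(plane.shape[0], (2.0 / 40) ** 2)           # uniform quadrature
receivers = np.array([[1.0, 0.1, 0.0], [1.0, -0.2, 0.3]])
print(huygens_kirchhoff(source, receivers, plane, weights, k))
```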