-
Abstract. Coupled simulations of surface and variably saturated subsurface flow, termed integrated hydrologic models (IHMs), can provide powerful insights into the complex dynamics of watersheds. The system of governing equations solved by an IHM is non‐linear, making these models a significant computational burden and challenging to parameterize accurately. Consequently, a large fraction of IHM studies to date have been “numerical hypothesis testing” studies, but, as parallel computing continues to improve, IHMs are approaching the point where they might also be useful as predictive tools. For this to become reality, the predictive uncertainty of such highly parameterized simulations must be considered. However, uncertainty is seldom considered in the IHM literature, likely due to the long runtimes of these complex simulations. The questions considered herein are how much uncertainty there is in an IHM for a common watershed simulation scenario, and how likely it is that any one realization of a system will give the same relative change as any other due to a perturbation in recharge. A stochastic ensemble of 250 permeability field realizations was used to show that uncertainty in a high‐mountain headwaters system is dominated by the subsurface. Recharge perturbation scenarios echo these results, but the uncertainty of changes in streamflow or groundwater pressure heads was significantly smaller than the uncertainty in their base‐case values. The main finding is that IHMs do provide confident, predictive estimates of relative changes in watersheds, even when uncertainty in specific simulation outputs may be high.
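The ensemble workflow behind these results can be sketched compactly: run each permeability realization under a base and a perturbed recharge forcing, then compare the ensemble spread of the raw outputs against the spread of the relative changes. The Python sketch below is a minimal, hypothetical illustration of that bookkeeping; `run_ihm` is a toy stand-in response, not the integrated hydrologic model itself, and all parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
N_REAL = 250  # ensemble size, as in the study

def run_ihm(log_k_field, recharge):
    """Hypothetical stand-in for an IHM run: returns a 'streamflow' that
    grows with recharge and is modulated by the realization's permeability."""
    return recharge * np.exp(0.5 * log_k_field.mean())

# 250 synthetic log-permeability "fields" (reduced to small vectors for brevity)
realizations = [rng.normal(loc=-11.0, scale=1.0, size=100) for _ in range(N_REAL)]

base_recharge, perturbed_recharge = 1.0, 1.2  # e.g., a +20 % recharge scenario

base = np.array([run_ihm(k, base_recharge) for k in realizations])
pert = np.array([run_ihm(k, perturbed_recharge) for k in realizations])
rel_change = (pert - base) / base

# Compare the spread (coefficient of variation) of base-case outputs
# with the spread of the relative change under the perturbation.
print("CV of base-case streamflow:", base.std() / base.mean())
print("CV of relative change:     ", rel_change.std() / abs(rel_change.mean()))
```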
-
Modern hydrologic models have extraordinary capabilities for representing complex processes in surface-subsurface systems. These capabilities have revolutionized the way we conceptualize flow systems, but how to represent uncertainty in simulated flow systems is not as well developed. Currently, characterizing model uncertainty can be computationally expensive, in part because the techniques are appended to the numerical methods rather than seamlessly integrated. The next generation of computers, however, presents opportunities to reformulate the modeling problem so that the uncertainty components are handled more directly within the flow system simulation. Misconceptions about quantum computing abound; quantum computers will not be a “silver bullet” for solving all complex problems, but they might be leveraged for certain kinds of highly uncertain problems, such as groundwater (GW) flow. The point of this issue paper is that the GW community could try to revise the foundations of our models so that the governing equations being solved are tailored specifically for quantum computers. The goal moving forward should not just be to accelerate the models we have, but also to address their deficiencies. Embedding uncertainty into the models by evolving distribution functions will make predictive GW modeling more complicated, but doing so places the problem into a complexity class that is highly efficient on quantum computing hardware. Next-generation GW models could put uncertainty into the problem at the very beginning of a simulation and leave it there throughout, providing a completely new way of simulating subsurface flows.
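To make the idea of "evolving distribution functions" concrete, the sketch below evolves a probability density of hydraulic head with a simple Fokker–Planck-type update on a classical computer, rather than evolving a single deterministic head value. It is only an illustrative analogy under assumed (hypothetical) drift and diffusion coefficients, not the quantum formulation the paper envisions.

```python
import numpy as np

# Classical illustration only: evolve a probability density p(h) of hydraulic
# head h instead of a single head value. Grid, drift, and diffusion values
# are hypothetical placeholders.
h = np.linspace(0.0, 10.0, 201)            # state-space grid for head [m]
dh = h[1] - h[0]
p = np.exp(-0.5 * ((h - 5.0) / 0.5) ** 2)  # initial uncertainty (Gaussian)
p /= p.sum() * dh                          # normalize to a valid PDF

drift, diff = -0.05, 0.02                  # assumed drift / diffusion coefficients
dt, n_steps = 0.01, 500

for _ in range(n_steps):
    dp = np.gradient(p, dh)                # central-difference derivatives of the PDF
    d2p = np.gradient(dp, dh)
    p = p + dt * (-drift * dp + diff * d2p)  # Fokker-Planck-type update
    p = np.clip(p, 0.0, None)
    p /= p.sum() * dh                      # keep total probability equal to 1

mean = (h * p).sum() * dh
std = np.sqrt(((h - mean) ** 2 * p).sum() * dh)
print(f"head after evolution: mean {mean:.2f} m, std {std:.2f} m")
```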
-
Abstract. Lagrangian particle tracking schemes allow a wide range of flow and transport processes to be simulated accurately, but a major challenge is numerically implementing the inter-particle interactions in an efficient manner. This article develops a multi-dimensional, parallelized domain decomposition (DDC) strategy for mass-transfer particle tracking (MTPT) methods in which particles exchange mass dynamically. We show that this can be efficiently parallelized by employing large numbers of CPU cores to accelerate run times. In order to validate the approach and our theoretical predictions, we focus our efforts on a well-known benchmark problem with pure diffusion, where analytical solutions in any number of dimensions are well established. In this work, we investigate different procedures for “tiling” the domain in two and three dimensions (2-D and 3-D), as this type of formal DDC construction is currently limited to 1-D. An optimal tiling is prescribed based on physical problem parameters and the number of available CPU cores, as each tiling provides distinct results in both accuracy and run time. We further extend the most efficient technique to 3-D for comparison, leading to an analytical discussion of the effect of dimensionality on strategies for implementing DDC schemes. Increasing computational resources (cores) within the DDC method produces a trade-off between inter-node communication and on-node work. For an optimally subdivided diffusion problem, the 2-D parallelized algorithm achieves nearly perfect linear speedup in comparison with the serial run, up to around 2700 cores, reducing a 5 h simulation to 8 s, while the 3-D algorithm maintains appreciable speedup up to 1700 cores.
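A rough feel for the tiling step can be given in a few lines: choose a near-square decomposition of the 2-D domain for the available cores and map each particle to the tile that owns it. This Python sketch is purely illustrative; the near-square rule and the helper names are assumptions, and it omits the MPI communication and the mass-transfer kernel described in the article.

```python
import numpy as np

def near_square_tiling(n_cores):
    """Pick a (rows x cols) tiling of the 2-D domain that is as close to
    square as possible for the available cores; a hypothetical stand-in for
    the article's optimal-tiling rule, which also uses physical parameters."""
    best = (1, n_cores)
    for rows in range(1, int(np.sqrt(n_cores)) + 1):
        if n_cores % rows == 0:
            best = (rows, n_cores // rows)
    return best

def assign_particles_to_tiles(x, y, domain=(1.0, 1.0), tiles=(4, 4)):
    """Map particle positions to a tile index; in a parallel MTPT run each
    tile would be owned by one CPU core (MPI rank)."""
    rows, cols = tiles
    Lx, Ly = domain
    col = np.minimum((x / Lx * cols).astype(int), cols - 1)
    row = np.minimum((y / Ly * rows).astype(int), rows - 1)
    return row * cols + col

rng = np.random.default_rng(1)
x, y = rng.random(10_000), rng.random(10_000)   # synthetic particle positions
tiles = near_square_tiling(48)                  # e.g., 48 cores -> 6 x 8 tiles
owner = assign_particles_to_tiles(x, y, tiles=tiles)
print(tiles, np.bincount(owner))                # rough load balance per tile
```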
