-
Summary: Geological Carbon Storage (GCS) is one of the most viable climate-change-mitigating net-negative CO2-emission technologies for large-scale CO2 sequestration. However, subsurface complexities and reservoir heterogeneity demand a systematic approach to uncertainty quantification to ensure both containment and conformance, as well as to optimize operations. As a step toward a digital twin for monitoring and control of underground storage, we introduce a new machine-learning-based data-assimilation framework validated on realistic numerical simulations. The proposed digital shadow combines simulation-based inference (SBI) with a novel neural adaptation of a recently developed nonlinear ensemble filtering technique. To characterize the posterior distribution of CO2 plume states (saturation and pressure) conditioned on multimodal time-lapse data, consisting of imaged surface seismic and well-log data, a generic recursive scheme is employed, where neural networks are trained on simulated ensembles for the time-advanced state and observations. Once trained, the digital shadow infers the state as time-lapse field data become available. Unlike ensemble Kalman filtering, corrections to predicted states are computed via a learned nonlinear prior-to-posterior mapping that supports non-Gaussian statistics and nonlinear models for the dynamics and observations. Training and inference are facilitated by the combined use of conditional invertible neural networks and bespoke physics-based summary statistics. Starting with a probabilistic permeability model derived from a baseline seismic survey, the digital shadow is validated against unseen simulated ground-truth time-lapse data. Results show that injection-site-specific uncertainty in permeability can be incorporated into state uncertainty, and the highest reconstruction quality is achieved when conditioning on both seismic and wellbore data.
Despite incomplete permeability knowledge, the digital shadow accurately tracks the subsurface state throughout a realistic CO2 injection project. This work establishes the first proof of concept for an uncertainty-aware, scalable digital shadow, laying the foundation for a digital twin to optimize underground storage operations.
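The recursive scheme described above can be sketched in a toy 1-D setting. The snippet below is an illustrative Python analogue, not the authors' implementation: a least-squares regression from simulated observations to states stands in for the trained conditional invertible network, and scalar toy functions stand in for the multiphase-flow dynamics and the seismic/well observation operators (all names and functions here are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(x):    # toy nonlinear state transition (stand-in for multiphase-flow simulation)
    return x + 0.1 * np.sin(x)

def observe(x):     # toy noisy nonlinear observation (stand-in for imaged seismic/well data)
    return x**2 + 0.05 * rng.standard_normal(np.shape(x))

x_true = 1.0                                  # unseen ground-truth state
ensemble = rng.normal(1.0, 0.3, size=500)     # samples from the prior state distribution

for step in range(5):
    # 1) advance the truth and the ensemble with the dynamics
    x_true = dynamics(x_true)
    ensemble = dynamics(ensemble)
    # 2) simulate observations for every ensemble member
    y_sim = observe(ensemble)
    # 3) learn a prior-to-posterior correction from (observation, state) pairs;
    #    here a least-squares fit stands in for the conditional invertible network
    A = np.vstack([y_sim, np.ones_like(y_sim)]).T
    coef, *_ = np.linalg.lstsq(A, ensemble, rcond=None)
    # 4) condition the ensemble on the incoming field observation
    y_field = observe(x_true)
    ensemble = ensemble + coef[0] * (y_field - y_sim)

print(abs(ensemble.mean() - x_true))  # posterior mean tracks the true state
```

With a linear regression this correction reduces to an ensemble-Kalman-style update; the paper's point is that replacing it with a learned nonlinear mapping supports non-Gaussian statistics and nonlinear dynamics and observations.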
-
Abstract: Due to their built-in uncertainty quantification, Bayesian solutions to inverse problems are the framework of choice in risk-averse applications. These benefits come at the cost of computations that are, in general, intractable. New advances in machine learning and variational inference (VI) have lowered this computational barrier by leveraging data-driven learning. Two VI paradigms have emerged that represent different tradeoffs: amortized and non-amortized. Amortized VI produces fast results but, because it generalizes across many observed datasets, yields suboptimal inference; non-amortized VI is slower at inference but finds better posterior approximations, since it specializes to a single observed dataset. Current amortized VI techniques hit a suboptimality wall that cannot be overcome without more expressive neural networks or extra training data. We present a solution that enables iterative improvement of amortized posteriors using the same network architectures and training data. Our method requires extra computations, but these remain frugal because they are based on physics-hybrid methods and summary statistics. Importantly, these computations take place mostly offline, so our method maintains cheap, reusable online evaluation while bridging the optimality gap between the two paradigms. We denote our proposed method ASPIRE: Amortized posteriors with Summaries that are Physics-based and Iteratively REfined. We first validate our method on a stylized problem with a known posterior, then demonstrate its practical use on a high-dimensional and nonlinear transcranial medical imaging problem with ultrasound. Compared with the baseline and previous methods in the literature, ASPIRE stands out as a computationally efficient and high-fidelity method for posterior inference.
Free, publicly-accessible full text available March 14, 2026.
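The iterative-refinement idea can be illustrated on a toy 1-D inverse problem. In the sketch below (an assumed analogue, not the ASPIRE implementation), a least-squares fit stands in for the conditional network, the data residual at the current estimate stands in for the physics-based summary statistic, and `forward` is a hypothetical stand-in for the expensive physics:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(x):                      # toy nonlinear forward operator (stand-in for wave physics)
    return np.tanh(x) + 0.3 * x

def fit_linear(s, t):                # least-squares stand-in for the conditional network
    A = np.vstack([s, np.ones_like(s)]).T
    return np.linalg.lstsq(A, t, rcond=None)[0]

# offline: simulate training pairs from the prior
x_train = rng.normal(0.0, 1.0, 5000)
y_train = forward(x_train) + 0.05 * rng.standard_normal(5000)

# the single observed dataset we ultimately care about
x_star = 1.5
y_star = forward(x_star) + 0.05 * rng.standard_normal()

# round 0: amortized estimate conditioned on the raw data
a, b = fit_linear(y_train, x_train)
xhat_train = a * y_train + b
xhat_star = a * y_star + b

# rounds 1..K: the physics-based summary is the data residual at the
# current estimate; the same cheap estimator is retrained on it
for _ in range(3):
    s_train = y_train - forward(xhat_train)           # refined summary (training set)
    a, b = fit_linear(s_train, x_train - xhat_train)  # learn a correction
    xhat_train = xhat_train + a * s_train + b
    xhat_star = xhat_star + a * (y_star - forward(xhat_star)) + b

print(xhat_star)  # refined estimate of the true x_star
```

The retraining happens offline on simulated pairs; conditioning on the observed dataset only requires cheap evaluations of the fitted map plus a few forward simulations, mirroring the amortized-but-refined tradeoff described above.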
-
WISER: Multimodal variational inference for full-waveform inversion without dimensionality reduction
We develop a semiamortized variational inference (VI) framework designed for computationally feasible uncertainty quantification in full-waveform inversion, exploring the multimodal posterior distribution without dimensionality reduction. The framework is called full-waveform VI via subsurface extensions with refinements (WISER). WISER builds on top of a supervised generative artificial intelligence method that performs approximate amortized inference at low cost, albeit with an amortization gap. This gap is closed through nonamortized refinements that make frugal use of wave physics. Case studies illustrate that WISER is capable of full-resolution, computationally feasible, and reliable uncertainty estimates of velocity models and imaged reflectivities.
Free, publicly-accessible full text available March 1, 2026.
-
We introduce a probabilistic technique for full-waveform inversion that uses variational inference and conditional normalizing flows to quantify uncertainty in migration-velocity models and its impact on imaging. Our approach integrates generative artificial intelligence with physics-informed common-image gathers, reducing reliance on accurate initial velocity models. Case studies demonstrate its efficacy in producing realizations of migration-velocity models conditioned on the data; these models are then used to quantify amplitude and positioning effects during subsequent imaging.
-
Normalizing flows are a density estimation method that provides efficient, exact likelihood evaluation and sampling (Dinh et al., 2014) from high-dimensional distributions. The method relies on the change-of-variables formula, which requires an invertible transform; normalizing-flow architectures are therefore built to be invertible by design (Dinh et al., 2014). In theory, the invertibility constraint limits expressiveness, but coupling layers allow normalizing flows to exploit the power of arbitrary neural networks, which do not themselves need to be invertible (Dinh et al., 2016), and layer invertibility means that, if properly implemented, many layers can be stacked to increase expressiveness without creating a training memory bottleneck. The package we present, InvertibleNetworks.jl, is a pure Julia (Bezanson et al., 2017) implementation of normalizing flows. We have implemented many relevant neural network layers, including GLOW 1x1 invertible convolutions (Kingma & Dhariwal, 2018), affine/additive coupling layers (Dinh et al., 2014), Haar wavelet multiscale transforms (Haar, 1909), and hierarchical invertible neural transport (HINT) (Kruse et al., 2021), among others. These modular layers can be easily composed and modified to create different types of normalizing flows. As starting points, we have implemented RealNVP, GLOW, HINT, hyperbolic networks (Lensink et al., 2022), and their conditional counterparts for users to quickly implement their individual applications.
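The key property of a coupling layer, exact invertibility with a cheap Jacobian log-determinant even though the conditioner network is not invertible, can be shown in a minimal numpy sketch. This is a conceptual illustration only, not the InvertibleNetworks.jl API, and all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# A tiny random MLP parameterizes the scale and shift; it acts only on the
# first half of the input, so it never has to be inverted itself.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = 0.5 * rng.normal(size=(8, 8)), np.zeros(8)  # outputs 4 log-scales + 4 shifts

def conditioner(x1):                      # x1 -> (log_s, t); any network works here
    h = np.tanh(x1 @ W1 + b1)
    out = h @ W2 + b2
    return out[:, :4], out[:, 4:]

def coupling_forward(x):
    x1, x2 = x[:, :4], x[:, 4:]
    log_s, t = conditioner(x1)
    y2 = x2 * np.exp(log_s) + t           # affine transform of the second half
    logdet = log_s.sum(axis=1)            # Jacobian log-determinant, cheap by design
    return np.concatenate([x1, y2], axis=1), logdet

def coupling_inverse(y):
    y1, y2 = y[:, :4], y[:, 4:]
    log_s, t = conditioner(y1)            # same conditioner: no MLP inversion needed
    x2 = (y2 - t) * np.exp(-log_s)
    return np.concatenate([y1, x2], axis=1)

x = rng.normal(size=(3, 8))
y, logdet = coupling_forward(x)
print(np.allclose(coupling_inverse(y), x))  # True: exact invertibility
```

Because the first half passes through unchanged, layers are typically stacked with permutations (e.g., GLOW's 1x1 convolutions) so that all coordinates are eventually transformed, and the per-layer log-determinants sum to give the exact likelihood via the change-of-variables formula.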
-
The industry is experiencing significant changes due to artificial intelligence (AI) and the challenges of the energy transition. While some view these changes as threats, recent advances in AI offer unique opportunities, especially in the context of “digital twins” for subsurface monitoring and control.
-
We present the Seismic Laboratory for Imaging and Modeling/Monitoring open-source software framework for computational geophysics and, more generally, inverse problems involving the wave equation (e.g., seismic and medical ultrasound), regularization with learned priors, and learned neural surrogates for multiphase flow simulations. By integrating multiple layers of abstraction, the software is designed to be both readable and scalable, allowing researchers to easily formulate problems in an abstract fashion while exploiting the latest developments in high-performance computing. The design principles and their benefits are demonstrated by building a scalable prototype for permeability inversion from time-lapse crosswell seismic data, which, aside from coupling wave physics and multiphase flow, involves machine learning.
