

Title: Optimized time-lapse acquisition design via spectral gap ratio minimization
Modern-day reservoir management and monitoring of geologic carbon storage increasingly call for costly time-lapse seismic data collection. We demonstrate how techniques from graph theory can be used to optimize acquisition geometries for low-cost sparse 4D seismic data. Based on midpoint-offset-domain connectivity arguments, our algorithm automatically produces sparse nonreplicated time-lapse acquisition geometries that favor wavefield recovery.
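The abstract does not spell out the objective being minimized. In related compressive-sensing acquisition work, a candidate geometry is scored by a spectral gap ratio, the ratio of the two largest singular values of the binary sampling mask (viewed in the midpoint-offset domain), with smaller ratios favoring wavefield recovery. A minimal sketch under that assumption; the survey sizes and the random-search loop here are illustrative, not the paper's algorithm:

```python
import numpy as np

def spectral_gap_ratio(mask):
    """Ratio of the two largest singular values of a binary sampling
    mask; smaller ratios are taken to favor wavefield recovery
    (hypothetical definition, not quoted from the paper)."""
    s = np.linalg.svd(mask, compute_uv=False)
    return s[1] / s[0]

rng = np.random.default_rng(0)
n_src, n_rec, n_keep = 32, 32, 8  # illustrative survey dimensions

def random_geometry():
    """Sparse candidate geometry: n_keep active sources, each
    recorded into n_keep randomly chosen receivers."""
    m = np.zeros((n_src, n_rec))
    for r in rng.choice(n_src, size=n_keep, replace=False):
        m[r, rng.choice(n_rec, size=n_keep, replace=False)] = 1.0
    return m

# Among random proposals, keep the geometry whose mask has the
# smallest spectral gap ratio.
best = min((random_geometry() for _ in range(200)), key=spectral_gap_ratio)
print(f"best spectral gap ratio: {spectral_gap_ratio(best):.3f}")
```

A full design algorithm would replace the random search with a principled optimization over the midpoint-offset connectivity graph, as the abstract describes.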
Page Range / eLocation ID: A19 to A23
Sponsoring Org: National Science Foundation
More Like this

  1.
    Repeatedly recording seismic data over a period of months or years is one way to identify trapped oil and gas and to monitor CO2 injection in underground storage reservoirs and saline aquifers. This process of recording data over time and then differencing the resulting images assumes that data acquisition over a particular subsurface region is repeatable. In other words, the hope is that one can recover changes in the Earth when the survey parameters are held fixed between data collection times. Unfortunately, perfect experimental repeatability almost never occurs. Acquisition inconsistencies, such as weather changes (currents, wind) during marine seismic surveys, are inevitable, resulting at the very least in source and receiver location differences between surveys. Thus, data processing aimed at improving repeatability between baseline and monitor surveys is extremely useful. One such processing tool is regularization (or binning), which aligns multiple surveys with different source or receiver configurations onto a common grid. Data binned onto a regular grid can be stored in a high-dimensional data structure called a tensor, with, for example, x and y receiver coordinates and time as indices of the tensor. Such a higher-order data structure describing a subsection of the Earth often exhibits redundancies that can be exploited to fill in gaps caused by sampling the surveys onto the common grid. In fact, since data gaps and noise increase the rank of the tensor, seeking to recover the original data by reducing the rank (low-rank tensor-based completion) successfully fills in gaps caused by binning. The tensor nuclear norm (TNN) is defined via the tensor singular value decomposition (tSVD), which generalizes the matrix SVD. In this work we complete missing time-lapse data caused by binning by using the alternating direction method of multipliers (ADMM) to minimize the TNN.
For a synthetic experiment with three parabolic events, in which the time-lapse difference involves an amplitude increase in one of these events between baseline and monitor data sets, the binning and reconstruction algorithm (TNN-ADMM) correctly recovers this time-lapse change. We also apply this workflow of binning and TNN-ADMM reconstruction to a real marine survey from offshore Western Australia, in which binning onto a regular grid leaves significant data gaps. After reconstruction, the data vary continuously, without the large gaps caused by the binning process.
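The TNN-ADMM scheme described above can be sketched compactly: singular-value thresholding under the t-SVD (FFT along the tube axis, per-slice soft thresholding in the Fourier domain) acts as the proximal operator of the TNN inside a standard ADMM loop that enforces the observed (binned) entries exactly. A minimal sketch assuming an equality-constrained formulation; parameter values and the synthetic tensor are illustrative:

```python
import numpy as np

def t_svt(T, tau):
    """Proximal operator of the tensor nuclear norm: FFT along the
    third (tube) axis, soft-threshold the singular values of every
    frontal slice in the Fourier domain, transform back."""
    F = np.fft.fft(T, axis=2)
    out = np.empty_like(F)
    for k in range(T.shape[2]):
        U, s, Vh = np.linalg.svd(F[:, :, k], full_matrices=False)
        out[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vh
    return np.real(np.fft.ifft(out, axis=2))

def tnn_admm(Y, mask, rho=1.0, n_iter=200):
    """Fill missing entries of Y (observed where mask is True) by
    minimizing the TNN subject to matching the observed entries."""
    X = np.where(mask, Y, 0.0)
    Z, U = X.copy(), np.zeros_like(X)
    for _ in range(n_iter):
        X = t_svt(Z - U, 1.0 / rho)  # TNN proximal step
        Z = X + U
        Z[mask] = Y[mask]            # enforce data consistency
        U = U + X - Z                # dual update
    return Z

# Demo on a low tubal-rank synthetic tensor with ~40% of entries missing.
rng = np.random.default_rng(1)
u, v = rng.standard_normal(20), rng.standard_normal(20)
T = np.stack([np.cos(0.5 * k) * np.outer(u, v) for k in range(5)], axis=2)
mask = rng.random(T.shape) < 0.6
X_rec = tnn_admm(T, mask)
rel_err = np.linalg.norm(X_rec - T) / np.linalg.norm(T)
print(f"relative recovery error: {rel_err:.3f}")
```

The key design choice is that the thresholding happens slice-by-slice in the Fourier domain, which is exactly what distinguishes the tSVD-based TNN from a matrix nuclear norm applied to an unfolding.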

  2.
    Surface-based 2D electrical resistivity tomography (ERT) surveys were used to characterize permafrost distribution at wetland sites on the alluvial plain north of the Tanana River, 20 km southwest of Fairbanks, Alaska, in June and September 2014. The sites were part of an ecologically sensitive research area for characterizing the biogeochemical response of this region to warming and permafrost thaw, and they contained landscape features characteristic of interior Alaska, including thermokarst bog, forested permafrost plateau, and a rich fen. The results show how vegetation reflects shallow (0–10 m depth) permafrost distribution. Additionally, we observed shallow (0–3 m depth) low-resistivity areas in the forested permafrost plateau, potentially indicating increased unfrozen water content as a precursor to ground instability and thaw. A time-lapse study from June to September suggested a depth of seasonal influence extending several meters below the active layer, potentially as a result of changes in unfrozen water content. A comparison of several electrode geometries (dipole-dipole, extended dipole-dipole, Wenner-Schlumberger) showed that results were similar for the depths of interest to our study (0–10 m), but data acquisition was fastest with the dipole-dipole geometry, making it our preferred choice. The results demonstrate the utility of ERT surveys for characterizing permafrost distribution at these sites, which is valuable for ecologically sensitive areas where ground-truthing can cause excessive disturbance. ERT data can characterize the subsurface geometry of permafrost in sufficient detail that changing permafrost conditions can be tracked over time. 
Characterizing the depth of thaw and thermal influence from the surface in these areas also indicates the depth to which carbon storage and microbially mediated carbon processing may be affected.
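The electrode-geometry comparison above hinges on each array's geometric factor, which converts a measured transfer resistance into an apparent resistivity. The general four-electrode formula below is standard; the spacings are illustrative and not taken from the survey:

```python
import numpy as np

def geometric_factor(a_pos, b_pos, m_pos, n_pos):
    """Geometric factor k for four surface electrodes (current at A, B;
    potential at M, N): apparent resistivity = k * delta_V / I."""
    d = lambda p, q: abs(p - q)
    return 2.0 * np.pi / (1.0 / d(a_pos, m_pos) - 1.0 / d(b_pos, m_pos)
                          - 1.0 / d(a_pos, n_pos) + 1.0 / d(b_pos, n_pos))

a = 5.0  # electrode spacing in metres (illustrative)

# Wenner array: electrodes A M N B at equal spacing a -> k = 2*pi*a.
k_wenner = geometric_factor(0.0, 3 * a, a, 2 * a)

# Dipole-dipole array with separation factor n: A B ... M N.
# |k| = pi * n * (n + 1) * (n + 2) * a (sign depends on labeling).
n = 2
k_dd = abs(geometric_factor(0.0, a, (1 + n) * a, (2 + n) * a))

print(f"Wenner k = {k_wenner:.1f}, dipole-dipole k = {k_dd:.1f}")
```

The rapidly growing dipole-dipole geometric factor implies weaker voltages at depth, which is part of the acquisition-speed versus signal-strength trade-off behind the geometry comparison reported above.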
  3.
    Reconstruction of sparsely sampled seismic data is critical for maintaining the quality of seismic images when significant numbers of shots and receivers are missing. We present a reconstruction method in the shot-receiver-time (SRT) domain based on a residual U-Net machine learning architecture, for seismic data acquired in a sparse 2-D acquisition, and name it SRT2D-ResU-Net. The SRT domain retains a high level of seismic signal connectivity, which is likely the main data feature that reconstruction algorithms rely on. We develop an “in situ training and prediction” workflow by dividing the acquisition area into two nonoverlapping subareas: a training subarea for establishing the network model using regularly sampled data and a testing subarea for reconstructing the sparsely sampled data using the trained model. To establish a reference base for analyzing the changes in data features over the study area, and for quantifying the reconstructed seismic data, we devise baseline references using a small portion of the field data. The baselines are properly spaced and excluded from the training and reconstruction processes. The results on a field marine data set show that SRT2D-ResU-Net can effectively learn the features of seismic data in the training process, and the average correlation between the reconstructed missing traces and the true traces exceeds 85%.
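The reported quality metric, average correlation between reconstructed and reference traces, can be computed as a per-trace Pearson correlation. A minimal sketch; the sinusoidal traces are illustrative stand-ins for field data, not the paper's data:

```python
import numpy as np

def mean_trace_correlation(rec, ref):
    """Average Pearson correlation between reconstructed and reference
    traces; both arrays have shape (n_traces, n_time)."""
    rec = rec - rec.mean(axis=1, keepdims=True)
    ref = ref - ref.mean(axis=1, keepdims=True)
    num = (rec * ref).sum(axis=1)
    den = np.linalg.norm(rec, axis=1) * np.linalg.norm(ref, axis=1)
    return float(np.mean(num / den))

# Illustrative traces: sinusoids of different frequencies plus noise,
# standing in for reference traces and their noisy reconstructions.
t = np.linspace(0.0, 1.0, 200)
ref = np.array([np.sin(2 * np.pi * (5 + i) * t) for i in range(10)])
rng = np.random.default_rng(0)
rec = ref + 0.1 * rng.standard_normal(ref.shape)

print(f"mean trace correlation: {mean_trace_correlation(rec, ref):.3f}")
```

In the paper's workflow this score would be evaluated only at the properly spaced baseline traces that were excluded from training and reconstruction.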
  4.
    Physics-based simulations provide a path to overcome the lack of observational data hampering a holistic understanding of earthquake faulting and crustal deformation across the vastly varying space–time scales governing the seismic cycle. However, simulations of sequences of earthquakes and aseismic slip (SEAS) including the complex geometries and heterogeneities of the subsurface are challenging. We present a symmetric interior penalty discontinuous Galerkin (SIPG) method to perform SEAS simulations accounting for the aforementioned challenges. Due to the discontinuous nature of the approximation, the spatial discretization natively provides a means to impose boundary and interface conditions. The method accommodates 2-D and 3-D domains, is of arbitrary order, handles subelement variations in material properties, and supports isoparametric elements, that is, high-order representations of the exterior boundaries, interior material interfaces, and embedded faults. We provide an open-source reference implementation, Tandem, that utilizes highly efficient kernels for evaluating the SIPG linear and bilinear forms, is inherently parallel, and is well suited to perform high-resolution simulations on large-scale distributed memory architectures. Additional flexibility and efficiency are provided by optionally defining the displacement evaluation via a discrete Green’s function approach, exploiting advantages of both the boundary integral and volumetric methods. The optional discrete Green’s functions are evaluated once in a precomputation stage using algorithmically optimal and scalable sparse parallel solvers and preconditioners. We illustrate the characteristics of the SIPG formulation via an extensive suite of verification problems (analytic, manufactured, and code comparison) for elastostatic and quasi-dynamic problems.
Our verification suite demonstrates that high-order convergence of the discrete solution can be achieved in space and time and highlights the benefits of using a high-order representation of the displacement, material properties and geometries. We apply Tandem to realistic demonstration models consisting of a 2-D SEAS multifault scenario on a shallowly dipping normal fault with four curved splay faults, and a 3-D intersecting multifault scenario of elastostatic instantaneous displacement of the 2019 Ridgecrest, CA, earthquake sequence. We exploit the curvilinear geometry representation in both application examples and elucidate the importance of accurate stress (or displacement gradient) representation on-fault. This study entails several methodological novelties. We derive a sharp bound on the smallest value of the SIPG penalty ensuring stability for isotropic, elastic materials; define a new flux to incorporate embedded faults in a standard SIPG scheme; employ a hybrid multilevel pre-conditioner for the discrete elasticity problem; and demonstrate that curvilinear elements are specifically beneficial for volumetric SEAS simulations. We show that our method can be applied for solving interesting geophysical problems using massively parallel computing. Finally, this is the first time a discontinuous Galerkin method is published for the numerical simulations of SEAS, opening new avenues to pursue extreme scale 3-D SEAS simulations in the future. 
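For orientation, the SIPG discretization underlying this work can be sketched on a scalar model problem (Poisson's equation); the elasticity form used in Tandem is analogous but tensor-valued. With jump and average operators on mesh faces, the bilinear form reads:

```latex
a_h(u, v) = \sum_{K \in \mathcal{T}_h} \int_K \nabla u \cdot \nabla v \,\mathrm{d}x
  - \sum_{F \in \mathcal{F}_h} \int_F \Big( \{\!\{\nabla u\}\!\} \cdot n_F \, [\![v]\!]
      + \{\!\{\nabla v\}\!\} \cdot n_F \, [\![u]\!] \Big) \,\mathrm{d}s
  + \sum_{F \in \mathcal{F}_h} \int_F \frac{\delta_F}{h_F} \, [\![u]\!] \, [\![v]\!] \,\mathrm{d}s .
```

Coercivity holds once the face penalty \(\delta_F\) exceeds a mesh- and degree-dependent threshold (for SIPG it grows like the square of the polynomial degree); the sharp bound on the smallest admissible penalty for isotropic elastic materials mentioned above is one of the paper's methodological contributions.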
  5.
    Geologic carbon storage represents one of the few truly scalable technologies capable of reducing the CO2 concentration in the atmosphere. While this technology has the potential to scale, its success hinges on our ability to mitigate its risks. An important aspect of risk mitigation concerns assurances that the injected CO2 remains within the storage complex. Among the different monitoring modalities, seismic imaging stands out due to its ability to attain high-resolution and high-fidelity images. However, these superior features come at prohibitive costs and time-intensive efforts that potentially render extensive seismic monitoring undesirable. To overcome this shortcoming, we present a methodology in which time-lapse images are created by inverting nonreplicated time-lapse monitoring data jointly. By no longer insisting on replication of the surveys to obtain high-fidelity time-lapse images and differences, extreme costs and time-consuming labor are averted. To demonstrate our approach, hundreds of realistic synthetic noisy time-lapse seismic data sets are simulated that contain imprints of regular CO2 plumes and irregular plumes that leak. These time-lapse data sets are subsequently inverted to produce time-lapse difference images that are used to train a deep neural classifier. The testing results show that the classifier is capable of detecting CO2 leakage automatically on unseen data with reasonable accuracy. We consider the use of this classifier as a first step in the development of an automatic workflow designed to handle the large number of continuously monitored CO2 injection sites needed to help combat climate change.
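The classification step can be illustrated end to end with a deliberately simplified stand-in: synthetic "difference images" in which leakage adds an off-plume anomaly, and a plain logistic-regression classifier in place of the paper's deep neural network. All shapes, amplitudes, and hyperparameters here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def difference_image(leaks, n=16):
    """Toy time-lapse difference image: a compact plume anomaly, plus
    an off-plume anomaly when the plume leaks."""
    img = rng.normal(0.0, 0.1, (n, n))
    img[6:10, 6:10] += 1.0          # regular plume signature
    if leaks:
        img[10:14, 2:6] += 1.0      # leakage signature
    return img.ravel()

# Labeled training set of difference images (leak / no leak).
X = np.array([difference_image(i % 2 == 1) for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)

# Train a logistic-regression classifier with plain gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

# Evaluate on held-out images, mirroring the "unseen data" test above.
X_test = np.array([difference_image(i % 2 == 1) for i in range(50)])
y_test = np.array([i % 2 for i in range(50)])
pred = (sigmoid(X_test @ w + b) > 0.5).astype(int)
accuracy = float(np.mean(pred == y_test))
print(f"held-out accuracy: {accuracy:.2f}")
```

A deep network earns its keep when the leakage signature varies in position, shape, and amplitude across inverted images; the linear model here only shows the shape of the train/test workflow.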