Title: Tensor-based reconstruction applied to regularized time-lapse data
SUMMARY Repeatedly recording seismic data over a period of months or years is one way to identify trapped oil and gas and to monitor CO2 injection in underground storage reservoirs and saline aquifers. This process of recording data over time and then differencing the images assumes that the recording of the data over a particular subsurface region is repeatable: the hope is that one can recover changes in the Earth when the survey parameters are held fixed between data collection times. Unfortunately, perfect experimental repeatability almost never occurs. Acquisition inconsistencies, such as changes in weather (currents, wind) for marine seismic data, are inevitable, resulting in source and receiver location differences between surveys at the very least. Thus, data processing aimed at improving repeatability between baseline and monitor surveys is extremely useful.

One such processing tool is regularization (or binning), which aligns multiple surveys with different source or receiver configurations onto a common grid. Data binned onto a regular grid can be stored in a high-dimensional data structure called a tensor, with, for example, x and y receiver coordinates and time as indices of the tensor. Such a higher-order data structure describing a subsection of the Earth often exhibits redundancies, which one can exploit to fill in gaps caused by sampling the surveys onto the common grid. In fact, since data gaps and noise increase the rank of the tensor, seeking to recover the original data by reducing the rank (low-rank tensor-based completion) successfully fills in gaps caused by binning. The tensor nuclear norm (TNN) is defined via the tensor singular value decomposition (tSVD), which generalizes the matrix SVD. In this work we complete missing time-lapse data caused by binning by using the alternating direction method of multipliers (ADMM) to minimize the TNN.

For a synthetic experiment with three parabolic events, in which the time-lapse difference involves an amplitude increase in one of these events between the baseline and monitor data sets, the binning and reconstruction algorithm (TNN-ADMM) correctly recovers the time-lapse change. We also apply this workflow of binning and TNN-ADMM reconstruction to a real marine survey from offshore Western Australia, in which binning onto a regular grid results in significant data gaps. The reconstructed data vary continuously without the large gaps caused by the binning process.
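The core proximal operation behind TNN minimization can be sketched in a few lines. The following is a minimal NumPy illustration of tensor singular value thresholding (soft-thresholding the frontal-slice singular values in the Fourier domain) combined with a data-consistency projection. It is a simplified stand-in for the paper's full ADMM solver, and the threshold and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def tsvt(X, tau):
    """Tensor singular value thresholding: soft-threshold the singular
    values of each frontal slice of X in the Fourier domain along mode 3.
    This is the proximal operator of the tensor nuclear norm (TNN)."""
    Xf = np.fft.fft(X, axis=2)
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Xf[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vh
    return np.real(np.fft.ifft(Xf, axis=2))

def tnn_complete(M, mask, tau=0.1, n_iters=200):
    """Fill in missing entries (mask == False) of a binned data tensor M
    by alternating TNN shrinkage with re-imposition of observed data.
    A simplified proximal scheme standing in for the paper's ADMM."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iters):
        X = tsvt(X, tau)
        X[mask] = M[mask]   # data-consistency projection on observed entries
    return X
```

Because binning gaps raise the tensor's rank, each shrinkage step pushes the estimate back toward low rank while the projection keeps it faithful to the recorded samples.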
Award ID(s):
1846690
PAR ID:
10368614
Author(s) / Creator(s):
; ;
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
Geophysical Journal International
Volume:
231
Issue:
1
ISSN:
0956-540X
Page Range / eLocation ID:
p. 638-649
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Seismic data are often incomplete due to equipment malfunction, limited source and receiver placement at near and far offsets, and missing crossline data. Seismic data contain redundancies because they are repeatedly recorded over the same or adjacent subsurface regions, causing the data to have a low-rank structure. To recover missing data, one can organize the data into a multidimensional array or tensor and apply a tensor completion method. We can increase the effectiveness and efficiency of low-rank data reconstruction based on tensor singular value decomposition (tSVD) by analyzing the effect of tensor orientation and exploiting the conjugate symmetry of the multidimensional Fourier transform. In fact, these results can be generalized to any order tensor. Relating the singular values of the tSVD to those of a matrix leads to a simplified analysis, revealing that the most square orientation gives the best data structure for low-rank reconstruction. After the first step of the tSVD, a multidimensional Fourier transform, frontal slices of the tensor form conjugate pairs. For each pair, a singular value decomposition can be replaced with a much cheaper conjugate calculation, allowing for faster computation of the tSVD. Using conjugate symmetry in our improved tSVD algorithm reduces the runtime of the inner loop by 35%–50%. We consider synthetic and real seismic data sets from the Viking Graben Region and the Northwest Shelf of Australia arranged as high-dimensional tensors. We compare the tSVD-based reconstruction with traditional methods, projection onto convex sets and multichannel singular spectrum analysis, and we see that the tSVD-based method gives similar or better accuracy and is more efficient, converging with runtimes that are an order of magnitude faster than the traditional methods. In addition, we verify that the most square orientation improves recovery for these examples by 10%–20% compared with the other orientations.
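The conjugate-symmetry shortcut described above can be sketched as follows (a minimal NumPy illustration, not the authors' implementation): for a real tensor, after the FFT along the third mode, frontal slice n3−k is the complex conjugate of slice k, so its SVD factors follow by conjugating slice k's factors instead of computing another SVD.

```python
import numpy as np

def tsvd_fast(X):
    """tSVD of a real 3-way tensor X, exploiting conjugate symmetry:
    SVDs are computed only for the first half of the Fourier-domain
    frontal slices; each remaining slice is the conjugate of an already
    factored slice, so its factors are obtained by cheap conjugation."""
    n1, n2, n3 = X.shape
    Xf = np.fft.fft(X, axis=2)
    r = min(n1, n2)
    Uf = np.empty((n1, r, n3), dtype=complex)
    Sf = np.empty((r, n3))
    Vf = np.empty((n2, r, n3), dtype=complex)
    half = n3 // 2 + 1
    for k in range(half):                 # SVDs only for the first half
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Uf[:, :, k], Sf[:, k], Vf[:, :, k] = U, s, Vh.conj().T
    for k in range(half, n3):             # conjugate pairs: no SVD needed
        Uf[:, :, k] = Uf[:, :, n3 - k].conj()
        Sf[:, k] = Sf[:, n3 - k]
        Vf[:, :, k] = Vf[:, :, n3 - k].conj()
    return Uf, Sf, Vf
```

Since A = U S V^H implies conj(A) = conj(U) S conj(V)^H, the conjugated factors are an exact SVD of the paired slice, which is the source of the reported 35%–50% inner-loop speedup.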
  2. Event detection is gaining increasing attention in smart cities research. Large-scale mobility data serve as an important tool to uncover the dynamics of urban transportation systems, and more often than not the dataset is incomplete. In this article, we develop a method to detect extreme events in large traffic datasets and to impute missing data during regular conditions. Specifically, we propose a robust tensor recovery problem to recover low-rank tensors under fiber-sparse corruptions with partial observations, and use it to identify events and impute missing data under typical conditions. Our approach is scalable to large urban areas, taking full advantage of the spatio-temporal correlations in traffic patterns. We develop an efficient algorithm to solve the tensor recovery problem based on the alternating direction method of multipliers (ADMM) framework. Compared with existing l1-norm-regularized tensor decomposition methods, our algorithm can exactly recover the values of uncorrupted fibers of a low-rank tensor and find the positions of corrupted fibers under mild conditions. Numerical experiments illustrate that our algorithm can achieve exact recovery and outlier detection even with missing data rates as high as 40% under 5% gross corruption, depending on the tensor size and the Tucker rank of the low-rank tensor. Finally, we apply our method to a real traffic dataset from downtown Nashville, TN, and successfully detect events such as severe car crashes, construction lane closures, and other large events that cause significant traffic disruptions.
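The low-rank-plus-fiber-sparse split can be sketched roughly as follows (assuming NumPy). This is a simplified alternating proximal scheme, not the paper's exact ADMM, and the regularization weights are illustrative; the key ingredient is the l2,1 proximal operator that zeroes uncorrupted fibers as a group.

```python
import numpy as np

def fiber_shrink(S, tau):
    """Proximal operator of the l2,1 norm over mode-3 fibers: shrink each
    fiber S[i, j, :] toward zero in l2 norm, zeroing fibers whose energy
    falls below tau (these are treated as uncorrupted)."""
    norms = np.linalg.norm(S, axis=2, keepdims=True)
    return S * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def detect_events(X, tau_l, tau_s, n_iters=30):
    """Split X into a low-rank background L (typical traffic) and a
    fiber-sparse component S marking events. A simplified alternating
    scheme with illustrative weights tau_l and tau_s, standing in for
    the paper's ADMM."""
    n1, n2, n3 = X.shape
    S = np.zeros_like(X)
    for _ in range(n_iters):
        # Low-rank step: soft-threshold singular values of an unfolding.
        M = (X - S).reshape(n1, n2 * n3)
        U, s, Vh = np.linalg.svd(M, full_matrices=False)
        L = ((U * np.maximum(s - tau_l, 0.0)) @ Vh).reshape(X.shape)
        # Sparse step: group-shrink the mode-3 fibers of the residual.
        S = fiber_shrink(X - L, tau_s)
    return L, S
```

The positions of the largest fibers in S flag candidate events, while L supplies imputed values for missing or corrupted entries under typical conditions.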
  3. Geologic carbon storage represents one of the few truly scalable technologies capable of reducing the CO2 concentration in the atmosphere. While this technology has the potential to scale, its success hinges on our ability to mitigate its risks. An important aspect of risk mitigation concerns assurances that the injected CO2 remains within the storage complex. Among the different monitoring modalities, seismic imaging stands out due to its ability to attain high-resolution and high-fidelity images. However, these superior features come at prohibitive costs and time-intensive efforts that potentially render extensive seismic monitoring undesirable. To overcome this shortcoming, we present a methodology in which time-lapse images are created by inverting nonreplicated time-lapse monitoring data jointly. By no longer insisting on replication of the surveys to obtain high-fidelity time-lapse images and differences, extreme costs and time-consuming labor are averted. To demonstrate our approach, hundreds of realistic synthetic noisy time-lapse seismic data sets are simulated that contain imprints of regular CO2 plumes and irregular plumes that leak. These time-lapse data sets are subsequently inverted to produce time-lapse difference images that are used to train a deep neural classifier. The testing results show that the classifier is capable of detecting CO2 leakage automatically on unseen data with reasonable accuracy. We consider the use of this classifier as a first step in the development of an automatic workflow designed to handle the large number of continuously monitored CO2 injection sites needed to help combat climate change.
  4. Coded aperture X-ray CT (CAXCT) is a new low-dose imaging technology that promises far-reaching benefits in industrial and clinical applications. It places a sequence of coded apertures (CAs), one at a time, in front of the X-ray source to partially block the radiation. The ill-posed inverse reconstruction problem is then solved using l1-norm-based iterative reconstruction methods. Unfortunately, to attain high-quality reconstructions, the CA patterns must change in concert with the view angles, making the implementation impractical. This paper proposes a simple yet radically different approach to CAXCT, coined StaticCodeCT, that uses a single static CA in the CT gantry, thus making the imaging system amenable to practical implementation. Rather than using conventional compressed sensing algorithms for recovery, we introduce a new reconstruction framework for StaticCodeCT. Namely, we synthesize the missing measurements using low-rank tensor completion principles that exploit the multidimensional data correlation and low-rank nature of a 3-way tensor formed by stacking the 2D coded CT projections. Then, we use the FDK algorithm to recover the 3D object. Computational experiments using experimental projection measurements exhibit up to 10% gains in the normalized root mean square distance of the reconstruction using the proposed method compared with those attained by alternative low-dose systems.
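To make the static-mask measurement model concrete, here is a hypothetical NumPy sketch with made-up detector and view counts (not the paper's data or code). With a single fixed CA, the same detector pixels are blocked at every view angle, so the unmeasured entries form tubes along the view axis of the stacked projection tensor, which is exactly the structured gap pattern the low-rank completion step must synthesize.

```python
import numpy as np

rng = np.random.default_rng(3)
n_det, n_views = 16, 12                 # illustrative detector/view counts

# Stand-in for a stack of 2D CT projections, one frontal slice per view.
projections = rng.random((n_det, n_det, n_views))

# A single static coded aperture: one fixed binary mask applied at every
# view angle, so the identical pixels are blocked in every projection.
static_ca = rng.random((n_det, n_det)) > 0.5
measured = projections * static_ca[:, :, None]
```

The blocked entries (tubes where static_ca is False) are then filled by tensor completion before handing the completed projections to FDK.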
  5. ABSTRACT Earthquake ground motions in the vicinity of receivers couple with the atmosphere to generate pressure perturbations that are detectable by infrasound sensors. These so-called local infrasound signals traverse very short source-to-receiver paths, so that they often exhibit a remarkable correlation with seismic velocity waveforms at collocated seismic stations, and there exists a simple relationship between vertical seismic velocity and pressure time series. This study leverages the large regional network of infrasound sensors in Alaska to examine local infrasound from several light to great Alaska earthquakes. We estimate seismic velocity time series from infrasound pressure records and use these converted infrasound recordings to compute earthquake magnitudes. This technique has potential utility beyond the novelty of recording seismic velocities on pressure sensors. Because local infrasound amplitudes from ground motions are small, it is possible to recover seismic velocities at collocated sites where the broadband seismometers have clipped. Infrasound-derived earthquake magnitudes exhibit good agreement with seismically derived values. This proof-of-concept demonstration of computing seismic magnitudes from infrasound sensors illustrates that infrasound sensors may be utilized as proxy vertical-component seismometers, making a new data set available for existing seismic techniques. Because single-sensor infrasound stations are relatively inexpensive and are becoming ubiquitous, this technique could be used to augment existing regional seismic networks using a readily available sensor platform. 
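The "simple relationship" between pressure and vertical seismic velocity is, for a locally plane acoustic wave, p ≈ ρc·v_z, where ρ is air density and c is sound speed. A minimal sketch of the conversion follows; the density and sound-speed values are nominal assumptions, and the study's exact scaling may differ.

```python
import numpy as np

RHO_AIR = 1.2   # kg/m^3, nominal air density (assumed value)
C_AIR = 340.0   # m/s, nominal sound speed (assumed value)

def pressure_to_velocity(p):
    """Convert an infrasound pressure trace (Pa) into an estimated
    vertical ground velocity trace (m/s) using the plane-wave acoustic
    impedance relation p = rho * c * v_z."""
    return np.asarray(p, dtype=float) / (RHO_AIR * C_AIR)
```

The converted velocity trace can then feed standard seismic magnitude formulas, which is what lets a pressure sensor act as a proxy vertical-component seismometer, including at sites where the collocated broadband seismometer has clipped.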