Repeatedly recording seismic data over a period of months or years is one way to identify trapped oil and gas and to monitor CO2 injection in underground storage reservoirs and saline aquifers. This process of recording data over time and then differencing the images assumes that the recording of the data over a particular subsurface region is repeatable. In other words, the hope is that one can recover changes in the Earth when the survey parameters are held fixed between data collection times. Unfortunately, perfect experimental repeatability almost never occurs. Acquisition inconsistencies, such as changes in weather (currents, wind) for marine seismic data, are inevitable, resulting in source and receiver location differences between surveys at the very least. Thus, data processing aimed at improving repeatability between baseline and monitor surveys is extremely useful. One such processing tool is regularization (or binning), which aligns multiple surveys with different source or receiver configurations onto a common grid. Data binned onto a regular grid can be stored in a high-dimensional data structure called a tensor with, for example, x and y receiver coordinates and time as indices of the tensor. Such a higher-order data structure describing a subsection of the Earth often exhibits redundancies which one can exploit to fill in gaps caused by sampling the surveys onto the common grid. In fact, since data gaps and noise increase the rank of the tensor, seeking to recover the original data by reducing the rank (low-rank tensor-based completion) successfully fills in gaps caused by binning. The tensor nuclear norm (TNN) is defined by the tensor singular value decomposition (t-SVD), which generalizes the matrix SVD. In this work, we complete missing time-lapse data caused by binning using the alternating direction method of multipliers (ADMM) to minimize the TNN.
For a synthetic experiment with three parabolic events, in which the time-lapse difference involves an amplitude increase in one of these events between baseline and monitor data sets, the binning and reconstruction algorithm (TNN-ADMM) correctly recovers this time-lapse change. We also apply this workflow of binning and TNN-ADMM reconstruction to a real marine survey from offshore Western Australia, in which the binning onto a regular grid results in significant data gaps. The reconstructed data vary continuously without the large gaps caused by the binning process.
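The computational core of the TNN minimization described above is a singular-value soft-thresholding step applied in the t-SVD domain. The NumPy sketch below is our own simplified illustration, not the authors' implementation: the function names are assumptions, and the completion loop uses a hard data constraint in place of the full ADMM updates with dual variables.

```python
import numpy as np

def tsvd_svt(X, tau):
    """Singular-value thresholding in the t-SVD domain: the proximal
    operator of the tensor nuclear norm (TNN) for a third-order tensor."""
    Xf = np.fft.fft(X, axis=2)              # DFT along the third (tube) mode
    Yf = np.zeros_like(Xf)
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)        # soft-threshold singular values
        Yf[:, :, k] = (U * s) @ Vh
    return np.real(np.fft.ifft(Yf, axis=2))

def complete(data, mask, tau=0.05, iters=300):
    """Hypothetical completion loop: alternate TNN shrinkage with
    re-imposing the observed samples (a simplification of TNN-ADMM)."""
    X = data * mask
    for _ in range(iters):
        X = tsvd_svt(X, tau)
        X[mask] = data[mask]                # keep known samples fixed
    return X
```

With the observed entries held fixed, the shrinkage step spreads the coherent low-rank structure into the binning gaps.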
An improved seismic data completion algorithm using low-rank tensor optimization: Cost reduction and optimal data orientation
Seismic data are often incomplete due to equipment malfunction, limited source and receiver placement at near and far offsets, and missing crossline data. Seismic data contain redundancies because they are repeatedly recorded over the same or adjacent subsurface regions, causing the data to have a low-rank structure. To recover missing data, one can organize the data into a multidimensional array or tensor and apply a tensor completion method. We can increase the effectiveness and efficiency of low-rank data reconstruction based on the tensor singular value decomposition (t-SVD) by analyzing the effect of tensor orientation and exploiting the conjugate symmetry of the multidimensional Fourier transform. In fact, these results can be generalized to tensors of any order. Relating the singular values of the t-SVD to those of a matrix leads to a simplified analysis, revealing that the most square orientation gives the best data structure for low-rank reconstruction. After the first step of the t-SVD, a multidimensional Fourier transform, frontal slices of the tensor form conjugate pairs. For each pair, a singular value decomposition can be replaced with a much cheaper conjugate calculation, allowing for faster computation of the t-SVD. Using conjugate symmetry in our improved t-SVD algorithm reduces the runtime of the inner loop by 35%–50%. We consider synthetic and real seismic data sets from the Viking Graben Region and the Northwest Shelf of Australia arranged as high-dimensional tensors. We compare the t-SVD-based reconstruction with traditional methods, projection onto convex sets and multichannel singular spectrum analysis, and we see that the t-SVD-based method gives similar or better accuracy and is more efficient, converging with runtimes that are an order of magnitude faster than the traditional methods. In addition, we verify that the most square orientation improves recovery for these examples by 10%–20% compared with the other orientations.
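The conjugate-symmetry saving described in the abstract can be sketched directly: for a real input tensor, the Fourier-domain frontal slices k and n3−k are complex conjugates, so the SVD factors of one slice are simply the conjugates of the other's. The NumPy sketch below is our own minimal illustration (economy-size factors only; function and variable names are assumptions, not the paper's code):

```python
import numpy as np

def tsvd(X):
    """Economy t-SVD of a real n1 x n2 x n3 tensor. Only the first
    ~half of the Fourier-domain slices need an SVD; the remaining
    slices are recovered as free conjugates."""
    n1, n2, n3 = X.shape
    r = min(n1, n2)
    Xf = np.fft.fft(X, axis=2)
    Uf = np.empty((n1, r, n3), dtype=complex)
    Sf = np.empty((r, n3))
    Vf = np.empty((n2, r, n3), dtype=complex)
    half = n3 // 2 + 1                      # slices 0 .. half-1 need an SVD
    for k in range(half):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Uf[:, :, k], Sf[:, k], Vf[:, :, k] = U, s, Vh.conj().T
    for k in range(half, n3):               # conjugate pairs: no SVD needed
        Uf[:, :, k] = Uf[:, :, n3 - k].conj()
        Sf[:, k] = Sf[:, n3 - k]
        Vf[:, :, k] = Vf[:, :, n3 - k].conj()
    return Uf, Sf, Vf
```

Because conj(A) = conj(U) S conj(V)^H whenever A = U S V^H, the second loop reproduces each paired slice's factors exactly, which is the source of the 35%–50% inner-loop speedup reported above.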
Award ID(s): 1846690
NSF-PAR ID: 10223753
Date Published:
Journal Name: GEOPHYSICS
Volume: 86
Issue: 3
ISSN: 0016-8033
Page Range / eLocation ID: V219–V232
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this

Higher-order tensors can represent scores in a rating system, frames in a video, and images of the same subject. In practice, the measurements are often highly quantized due to the sampling strategies or the quality of devices. Existing works on tensor recovery have focused on data losses and random noise. Only a few works consider tensor recovery from quantized measurements, and they are restricted to binary measurements. This paper, for the first time, addresses the problem of tensor recovery from multilevel quantized measurements by leveraging the low CANDECOMP/PARAFAC (CP) rank property. We study the recovery of both general low-rank tensors and tensors that have a tensor singular value decomposition (TSVD) by solving nonconvex optimization problems. We provide theoretical upper bounds on the recovery error, which diminish to zero when the sizes of the dimensions increase to infinity. We further characterize the fundamental limit of any recovery algorithm and show that our recovery error is nearly order-wise optimal. A tensor-based alternating proximal gradient descent algorithm with a convergence guarantee and a TSVD-based projected gradient descent algorithm are proposed to solve the nonconvex problems. Our recovery methods can also handle data losses and do not necessarily need the information of the quantization rule. The methods are validated on synthetic data, image datasets, and music recommender datasets.
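The projection idea behind the TSVD-based algorithm mentioned above can be illustrated with a toy sketch: take a gradient step toward the multilevel-quantized observations, then project back onto a low-tubal-rank set by truncating the t-SVD. This NumPy sketch is our own simplification (the paper's algorithms use the quantizer's likelihood rather than this plain squared loss, and all names here are assumptions):

```python
import numpy as np

def truncate_tubal(X, r):
    """Project onto tubal rank <= r by truncating the t-SVD slice-wise."""
    Xf = np.fft.fft(X, axis=2)
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Xf[:, :, k] = (U[:, :r] * s[:r]) @ Vh[:r]
    return np.real(np.fft.ifft(Xf, axis=2))

def recover_from_quantized(Yq, r, step=0.5, iters=20):
    """Toy projected gradient descent: squared-loss fit to the
    quantized observations Yq under a tubal-rank constraint."""
    X = truncate_tubal(Yq, r)
    for _ in range(iters):
        X = truncate_tubal(X - step * (X - Yq), r)  # grad of 0.5||X-Yq||^2
    return X
```

Even this crude variant removes most of the quantization noise, since the noise energy outside the low-rank subspace is discarded by the projection.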

The CP tensor decomposition is used in applications such as machine learning and signal processing to discover latent low-rank structure in multidimensional data. Computing a CP decomposition via an alternating least squares (ALS) method reduces the problem to several linear least squares problems. The standard way to solve these linear least squares subproblems is to use the normal equations, which inherit special tensor structure that can be exploited for computational efficiency. However, the normal equations are sensitive to numerical ill-conditioning, which can compromise the results of the decomposition. In this paper, we develop versions of the CP-ALS algorithm using the QR decomposition and the singular value decomposition, which are more numerically stable than the normal equations, to solve the linear least squares problems. Our algorithms utilize the tensor structure of the CP-ALS subproblems efficiently, have the same complexity as the standard CP-ALS algorithm when the input is dense and the rank is small, and are shown via examples to produce more stable results when ill-conditioning is present. Our MATLAB implementation achieves the same running time as the standard algorithm for small ranks, and we show that the new methods can obtain lower approximation error.
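The two ways of solving a CP-ALS factor subproblem contrasted above can be sketched for a third-order tensor: the normal equations use the cheap Hadamard product of factor Gram matrices but square the subproblem's condition number, while a QR factorization of the Khatri-Rao design matrix is stabler. A NumPy sketch under our own naming (the paper's implementations exploit more of the tensor structure than this direct version):

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker (Khatri-Rao) product."""
    return np.stack([np.kron(B[:, j], C[:, j]) for j in range(B.shape[1])], axis=1)

def update_A_normal(T, B, C):
    """Factor-A update via the normal equations: the r x r Gram matrix
    is the Hadamard product of the factor Grams (cheap, less stable)."""
    G = (B.T @ B) * (C.T @ C)
    T1 = T.reshape(T.shape[0], -1)          # mode-1 unfolding
    return np.linalg.solve(G, (T1 @ khatri_rao(B, C)).T).T

def update_A_qr(T, B, C):
    """Same subproblem via QR of the Khatri-Rao design matrix:
    more flops, but avoids squaring the condition number."""
    M = khatri_rao(B, C)
    T1 = T.reshape(T.shape[0], -1)
    Q, R = np.linalg.qr(M)
    return np.linalg.solve(R, Q.T @ T1.T).T
```

On a well-conditioned problem both updates agree; the QR route pays off when the factor matrices have nearly collinear columns.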

We propose a new fast streaming algorithm for the tensor completion problem of imputing missing entries of a low-tubal-rank tensor using the tensor singular value decomposition (t-SVD) algebraic framework. We show the t-SVD is a specialization of the well-studied block-term decomposition for third-order tensors, and we present an algorithm under this model that can track changing free submodules from incomplete streaming 2D data. The proposed algorithm uses principles from incremental gradient descent on the Grassmann manifold of subspaces to solve the tensor completion problem with linear complexity and constant memory in the number of time samples. We provide a local expected linear convergence result for our algorithm. Our empirical results are competitive in accuracy but much faster in compute time than state-of-the-art tensor completion algorithms on real applications to recover temporal chemosensing and MRI data under limited sampling.
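The streaming principle above, incremental gradient descent on the Grassmannian, can be illustrated in its simplest matrix form: update an orthonormal basis one partially observed column at a time. This NumPy sketch is a deliberately simplified stand-in (a plain rank-one gradient step followed by a QR retraction, with our own names and step rule), not the paper's refined update:

```python
import numpy as np

def stream_update(U, v, idx, step=0.5):
    """One streaming update of an orthonormal basis U (n x r) from a
    column v observed only at the index set idx."""
    w = np.linalg.lstsq(U[idx, :], v[idx], rcond=None)[0]  # best-fit weights
    res = np.zeros(U.shape[0])
    res[idx] = v[idx] - U[idx, :] @ w                      # observed residual
    U = U + step * np.outer(res, w) / (w @ w + 1e-12)      # rank-one grad step
    Q, _ = np.linalg.qr(U)                                 # retract to orthonormal
    return Q
```

Each update touches only one column and O(nr) memory, which is the sense in which such trackers run with constant memory in the number of time samples.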
