Title: An Efficient Alternating Algorithm for the Lp-Norm Cross-Gradient Joint Inversion of Gravity and Magnetic Data Using the 2-D Fast Fourier Transform
An efficient algorithm for the Lp-norm joint inversion of gravity and magnetic data using the cross-gradient constraint is presented. The presented framework incorporates stabilizers that use Lp-norms (0 ≤ p ≤ 2) of the model parameters and/or of the gradient of the model parameters. The formulation is developed from standard approaches for independent inversion of single data sets and thus also facilitates the inclusion of necessary model and data weighting matrices, for example, depth weighting and hard constraint matrices. Using the block-Toeplitz Toeplitz-block (BTTB) structure of the underlying sensitivity matrices for gravity and magnetic models, when data are obtained on a uniform grid, the blocks for each depth layer are embedded in block-circulant circulant-block (BCCB) matrices. All operations with these matrices are then implemented efficiently using 2-D fast Fourier transforms, with a significant reduction in storage requirements. The nonlinear global objective function is minimized iteratively by imposing stationarity on the linear equation that results from linearizing the objective function about a starting model. The resulting linear system is solved numerically at each iteration using the conjugate gradient algorithm. This is improved for large-scale problems by the introduction of an algorithm in which updates for the magnetic and gravity parameter models are alternated at each iteration, further reducing the total computational cost and storage requirements. Numerical results using a complicated 3-D synthetic model and real data sets obtained over the Galinge iron-ore deposit in Qinghai province, northwest China, demonstrate the efficiency of the presented algorithm.
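The central computational idea, embedding each BTTB layer block in a BCCB matrix so that matrix-vector products reduce to 2-D FFTs, can be sketched for a single depth layer. This is an illustrative NumPy sketch, not the authors' implementation; the kernel array `T` and the grid sizes are hypothetical placeholders.

```python
import numpy as np

def bttb_matvec(T, x, nx, ny):
    """Multiply a BTTB matrix by a vector using 2-D FFTs.

    T : (2*nx-1, 2*ny-1) array of generating kernel values, with
        T[p, q] = t[p-(nx-1), q-(ny-1)] covering all row/column lags.
    x : vector of length nx*ny (row-major grid ordering).
    Returns A @ x without forming the dense (nx*ny) x (nx*ny) matrix.
    """
    M, N = 2 * nx - 1, 2 * ny - 1
    # Embed the Toeplitz kernel into the generating array of a BCCB matrix:
    # non-negative lags first, then the negative lags wrapped around.
    c = np.zeros((M, N))
    c[:nx, :ny] = T[nx - 1:, ny - 1:]
    c[:nx, ny:] = T[nx - 1:, :ny - 1]
    c[nx:, :ny] = T[:nx - 1, ny - 1:]
    c[nx:, ny:] = T[:nx - 1, :ny - 1]
    # Zero-pad the model vector onto the extended grid.
    X = np.zeros((M, N))
    X[:nx, :ny] = x.reshape(nx, ny)
    # BCCB matrices are diagonalized by the 2-D DFT, so the product is a
    # pointwise multiplication in the Fourier domain.
    Y = np.fft.ifft2(np.fft.fft2(c) * np.fft.fft2(X)).real
    return Y[:nx, :ny].ravel()
```

Storage drops from (nx*ny)^2 entries per layer to the (2nx-1)(2ny-1) kernel values, and the product costs O(nx*ny*log(nx*ny)) instead of O((nx*ny)^2).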
Award ID(s): 1913136
NSF-PAR ID: 10340906
Journal Name: IEEE Transactions on Geoscience and Remote Sensing
Volume: 60
ISSN: 0196-2892
Page Range: 1 to 16
Sponsoring Org: National Science Foundation
More Like this
  1. SUMMARY

    A fast algorithm for the large-scale joint inversion of gravity and magnetic data is developed. The algorithm uses a non-linear Gramian constraint to impose correlation between the density and susceptibility of the reconstructed models. The global objective function is formulated in the space of the weighted parameters, but the Gramian constraint is implemented in the original space, and the non-linear constraint is imposed using two separate Lagrange parameters, one for each model domain. Significantly, this combined approach, using the two spaces, provides more similarity between the reconstructed models. Moreover, it is shown theoretically that the gradient for the unweighted space is not a scalar multiple of that for the weighted space, and hence the difference cannot be accounted for by adjusting the Lagrange parameters. It is assumed that the measured data are obtained on a uniform grid and that a consistent regular discretization of the volume domain is imposed. The sensitivity matrices then exhibit a block-Toeplitz Toeplitz-block structure for each depth layer of the model domain, and both forward and transpose operations with these matrices can be implemented efficiently using two-dimensional fast Fourier transforms. This makes it feasible to solve large-scale problems with respect to both computational cost and memory demands, and to solve the non-linear problem by applying iterative methods that rely only on matrix–vector multiplications. As such, the use of the regularized reweighted conjugate gradient algorithm, in conjunction with the structure of the sensitivity matrices, leads to a fast methodology for large-scale joint inversion of geophysical data sets. Numerical simulations demonstrate that it is possible to apply a non-linear joint inversion algorithm, with Lp-norm stabilisers, for the reconstruction of large model domains on a standard laptop computer.
It is demonstrated that, while the p = 1 choice provides sparse reconstructed solutions with sharp boundaries, it is also possible to use p = 2 to provide smooth and blurred models. The methodology is used for inverting gravity and magnetic data obtained over an area in the northwest of the Mesoproterozoic St Francois Terrane in southeast Missouri, USA.
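The Lp-norm stabilisers (0 ≤ p ≤ 2) discussed above are commonly realized through iteratively reweighted least squares, in which a diagonal weighting matrix converts the Lp term into a weighted L2 term at each iteration. The following is a minimal sketch of one standard choice of weights, not the authors' implementation; the focusing parameter name `eps` is an assumption.

```python
import numpy as np

def lp_irls_weights(m, p=1.0, eps=1e-9):
    """Diagonal IRLS weights approximating an Lp-norm stabilizer.

    With W = diag((m_i^2 + eps^2)^((p-2)/4)), the weighted L2 term
    ||W m||_2^2 = sum_i (m_i^2 + eps^2)^((p-2)/2) * m_i^2
    approximates sum_i |m_i|^p for small eps and 0 <= p <= 2.
    p = 1 promotes sparse models with sharp boundaries; p = 2 gives
    unit weights and hence smooth, blurred models.
    """
    return (m**2 + eps**2) ** ((p - 2) / 4)
```

The weights are recomputed from the current model at every outer iteration, so each linearized subproblem remains an ordinary weighted least-squares system.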

     
  2. SUMMARY
    We discuss the focusing inversion of potential field data for the recovery of sparse subsurface structures from surface measurement data on a uniform grid. For the uniform grid, the model sensitivity matrices have a block-Toeplitz Toeplitz-block structure for each block of columns related to a fixed depth layer of the subsurface. All forward operations with the sensitivity matrix, or its transpose, are then performed using the 2-D fast Fourier transform. Simulations show that the implementation of the focusing inversion algorithm using the fast Fourier transform is efficient, and that the algorithm can be realized on standard desktop computers with sufficient memory for storage of volumes up to size n ≈ 10^6. The linear systems of equations arising in the focusing inversion algorithm are solved using either Golub–Kahan bidiagonalization or randomized singular value decomposition algorithms. These two algorithms are contrasted for their efficiency when used to solve large-scale problems with respect to the sizes of the projected subspaces adopted for the solutions of the linear systems. The results confirm earlier studies that the randomized algorithms are to be preferred for the inversion of gravity data, and that for data sets of size m it is sufficient to use projected spaces of size approximately m/8. For the inversion of magnetic data sets, we show that it is more efficient to use Golub–Kahan bidiagonalization, and that it is again sufficient to use projected spaces of size approximately m/8. Simulations support the presented conclusions and are verified for the inversion of a magnetic data set obtained over the Wuskwatim Lake region in Manitoba, Canada.
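Golub–Kahan bidiagonalization, one of the two projection methods compared above, requires only products with the sensitivity matrix and its transpose, which is what makes it compatible with matrix-free FFT-based operators. A minimal NumPy sketch follows (illustrative only; it omits the reorthogonalization a production code would include):

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization of A started from b.

    Returns U (m, k+1) and V (n, k) with orthonormal columns and a
    lower-bidiagonal B (k+1, k) satisfying A @ V = U @ B.  Only
    matrix-vector products with A and A.T are used, so A could be
    applied implicitly (e.g. via 2-D FFTs) instead of stored densely.
    """
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for j in range(k):
        # v_j = A^T u_j - beta_j v_{j-1}, then normalize.
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        alpha = np.linalg.norm(v)
        V[:, j] = v / alpha
        B[j, j] = alpha
        # u_{j+1} = A v_j - alpha_j u_j, then normalize.
        u = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u)
        U[:, j + 1] = u / beta
        B[j + 1, j] = beta
    return U, B, V
```

The regularized least-squares problem is then solved in the small (k+1) x k projected space, which is why subspace sizes near m/8 translate into large savings.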
  3. SUMMARY

    Within the iron metallogenic province of southeast Missouri, USA, there are several mines that contain not only economic iron resources, magnetite and/or hematite, but also contain rare earth elements, copper and gold. An area including three major deposits, Pea Ridge, Bourbon and Kratz Spring, was selected for detailed modelling for the upper crustal magnetic susceptibility and density structures. For the study area, ground gravity and high-resolution airborne magnetic and gravity gradiometry data sets are available. An efficient and novel joint inversion algorithm for the simultaneous inversion of these multiple data sets is presented. The Gramian coupling constraint is used to correlate the reconstructed density and magnetic susceptibility models. The implementation relies on the structures of the sensitivity matrices and an efficient minimization algorithm to achieve significant reductions in the memory requirements and computational costs. Consequently, it is feasible to use a laptop computer for the inversion of multiple data sets, each containing thousands of data points, for the recovery of models on the study area, each including approximately one million model parameters. This is the first time that these multiple data sets have been simultaneously inverted for this area. The L1-norm stabilizer is used to provide compact and focused images of the ore deposits. For contrast, independent inversions of each data set are also discussed. In general, our results provide new insights about the concealed ore deposits in the Mesoproterozoic basement rocks of southeast Missouri. Both short- and long-wavelength anomalies exist in the recovered models; these provide a high-resolution image of the subsurface. The geometry and physical properties of the known deposits are determined very well. Additionally, some unknown concealed deposits are revealed; these could be economically valuable and should be considered in future geophysical and geological investigations.
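The Gramian coupling constraint used above measures how far two model vectors are from being linearly dependent. A minimal sketch of the coupling term follows (shown for the model vectors themselves; variants apply the same construction to model gradients):

```python
import numpy as np

def gramian(m1, m2):
    """Gramian coupling term for two model vectors.

    G = ||m1||^2 * ||m2||^2 - (m1 . m2)^2 is the determinant of the
    2x2 Gram matrix of m1 and m2.  By the Cauchy-Schwarz inequality
    G >= 0, with G = 0 exactly when m1 and m2 are linearly dependent,
    so adding G to the objective drives the density and susceptibility
    models toward linear correlation.
    """
    return np.dot(m1, m1) * np.dot(m2, m2) - np.dot(m1, m2) ** 2
```

Because the term vanishes for any scale factor between the two models, it enforces structural correlation without forcing the physical property values themselves to agree.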

     
  4. SUMMARY

    Accurate synthetic seismic wavefields can now be computed in 3-D earth models using the spectral element method (SEM), which helps improve resolution in full waveform global tomography. However, computational costs are still a challenge. These costs can be reduced by implementing a source stacking method, in which multiple earthquake sources are simultaneously triggered in only one teleseismic SEM simulation. One drawback of this approach is the perceived loss of resolution at depth, in particular because high-amplitude fundamental mode surface waves dominate the summed waveforms, without the possibility of windowing and weighting as in conventional waveform tomography.

    This can be addressed by redefining the cost-function and computing the cross-correlation wavefield between pairs of stations before each inversion iteration. While the Green’s function between the two stations is not reconstructed as well as in the case of ambient noise tomography, where sources are distributed more uniformly around the globe, this is not a drawback, since the same processing is applied to the 3-D synthetics and to the data, and the source parameters are known to a good approximation. By doing so, we can separate time windows with large energy arrivals corresponding to fundamental mode surface waves. This opens the possibility of designing a weighting scheme to bring out the contribution of overtones and body waves. It also makes it possible to balance the contributions of frequently sampled paths versus rarely sampled ones, as in more conventional tomography.
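The station-pair cross-correlation applied identically to stacked data and stacked synthetics can be computed efficiently in the frequency domain. The following is a minimal NumPy sketch for two equal-length records; windowing and weighting are omitted, and the function name is an assumption.

```python
import numpy as np

def cross_correlate(s1, s2):
    """Full linear cross-correlation of two equal-length records via FFT.

    Zero-pads to length 2n-1 to avoid circular wrap-around; the output
    has zero lag at index n-1 and lags running from -(n-1) to n-1.
    Applying the same operation to stacked synthetics and stacked data
    isolates inter-station information from the summed source wavefield.
    """
    n = len(s1)
    nfft = 2 * n - 1
    spec = np.fft.rfft(s1, nfft) * np.conj(np.fft.rfft(s2, nfft))
    c = np.fft.irfft(spec, nfft)
    # Shift so negative lags precede positive lags.
    return np.roll(c, n - 1)
```

Time windows around large-energy arrivals in the correlations can then be selected and weighted, e.g. to down-weight fundamental mode surface waves relative to overtones and body waves.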

    Here we present the results of proof-of-concept testing of such an approach for a synthetic 3-component long-period waveform data set (periods longer than 60 s), computed for 273 globally distributed events in a simple toy 3-D radially anisotropic upper-mantle model that contains shear wave anomalies at different scales. We compare the results of inversion of 10 000 s long stacked time-series, starting from a 1-D model, using source-stacked waveforms and station-pair cross-correlations of these stacked waveforms in the definition of the cost function. We compute the gradient and the Hessian using normal mode perturbation theory, which avoids the cross-talk problem encountered when forming the gradient using an adjoint approach. We perform inversions with and without realistic noise added and show that the model can be recovered equally well using either cost function.

    The proposed approach is computationally very efficient. Application to more realistic synthetic data sets, and to real data, is beyond the scope of this paper, since it requires additional steps to account for issues such as missing data. Nevertheless, we illustrate how this methodology can help inform first-order questions, such as model resolution in the presence of noise and trade-offs between different physical parameters (anisotropy, attenuation, crustal structure, etc.), that would be computationally very costly to address adequately using conventional full waveform tomography based on single-event wavefield computations.

     
  5. The focusing inversion of gravity and magnetic potential-field data using the randomized singular value decomposition (RSVD) method is considered. This approach tackles the computational challenge that arises in the solution of the inversion problem when the standard and accurate approximation of the integral equation kernel is used. We have developed a comprehensive comparison of the methodology for the inversion of magnetic and gravity data. The results verify that there is an important difference between the application of the methodology to gravity and to magnetic inversion problems. Specifically, RSVD depends on the generation of a rank-k approximation to the underlying model matrix, and the results demonstrate that k needs to be larger, for equivalent problem sizes, for the magnetic problem than for the gravity problem. Without a relatively large k, the dominant singular values of the magnetic model matrix are not well approximated. We determine that this is due to the spectral properties of the matrix. The comparison also shows how the use of the power iteration embedded within the randomized algorithm improves the quality of the resulting dominant subspace approximation, especially in magnetic inversion, yielding acceptable approximations for smaller choices of k. Further, we evaluate how the differences in the spectral properties of the magnetic and gravity input matrices affect the values that are automatically estimated for the regularization parameter. The algorithm is applied and verified for the inversion of magnetic data obtained over a portion of the Wuskwatim Lake region in Manitoba, Canada.
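The RSVD with embedded power iteration discussed above can be sketched as follows. This is a generic Halko-style implementation for illustration, not the code used in the study; the oversampling and iteration counts are assumed defaults.

```python
import numpy as np

def rsvd(A, k, n_power=2, oversample=10, seed=0):
    """Randomized SVD of A with power iterations.

    Samples the range of A with a Gaussian test matrix, then applies
    n_power rounds of A A^T to sharpen the decay of the sampled
    spectrum; this helps when singular values decay slowly, as reported
    for the magnetic sensitivity matrix, so a smaller rank k suffices.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega
    for _ in range(n_power):
        # Re-orthonormalize between applications for numerical stability.
        Q, _ = np.linalg.qr(Y)
        Y = A @ (A.T @ Q)
    Q, _ = np.linalg.qr(Y)
    # Project A onto the sampled subspace and take the small SVD.
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]
```

The dominant cost is the 2(n_power + 1) products with A and A.T, so each extra power iteration trades one more pass over the matrix for a better subspace at a fixed k.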