Title: Large-scale focusing joint inversion of gravity and magnetic data with Gramian constraint
SUMMARY

A fast algorithm for the large-scale joint inversion of gravity and magnetic data is developed. The algorithm uses a non-linear Gramian constraint to impose correlation between the density and susceptibility of the reconstructed models. The global objective function is formulated in the space of the weighted parameters, but the Gramian constraint is implemented in the original space, and the non-linear constraint is imposed using two separate Lagrange parameters, one for each model domain. Significantly, this combined approach, using the two spaces, provides greater similarity between the reconstructed models. Moreover, it is shown theoretically that the gradient obtained in the unweighted space is not a scalar multiple of the gradient in the weighted space, and hence the difference cannot be accounted for by adjusting the Lagrange parameters. It is assumed that the measured data are obtained on a uniform grid and that a consistent regular discretization of the volume domain is imposed. Then the sensitivity matrices exhibit a block-Toeplitz-Toeplitz-block (BTTB) structure for each depth layer of the model domain, and both forward and transpose operations with these matrices can be implemented efficiently using two-dimensional fast Fourier transforms. This makes it feasible to solve large-scale problems with respect to both computational cost and memory demands, and to solve the non-linear problem by applying iterative methods that rely only on matrix–vector multiplications. As such, the use of the regularized reweighted conjugate gradient algorithm, in conjunction with the structure of the sensitivity matrices, leads to a fast methodology for the large-scale joint inversion of geophysical data sets. Numerical simulations demonstrate that it is possible to apply a non-linear joint inversion algorithm, with Lp-norm stabilisers, for the reconstruction of large model domains on a standard laptop computer. It is demonstrated that, while the choice p = 1 provides sparse reconstructed solutions with sharp boundaries, the choice p = 2 provides smooth and blurred models. The methodology is used for inverting gravity and magnetic data obtained over an area in the northwest of the Mesoproterozoic St Francois Terrane, southeast Missouri, USA.
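The BTTB structure is what enables the FFT-based acceleration: each layer's Toeplitz kernel can be embedded in a circulant kernel, whose action is diagonalized by the 2-D FFT. Below is a minimal sketch of this idea for a single depth layer; the kernel array t and the grid sizes are illustrative placeholders, not the paper's actual gravity or magnetic sensitivity kernels.

```python
import numpy as np

def bttb_matvec(t, x, n1, n2):
    """Multiply a BTTB matrix by a vector using 2-D FFTs.

    t : (2*n1 - 1, 2*n2 - 1) array of the distinct Toeplitz entries,
        indexed so that t[i, j] is the kernel value at lag
        (i - (n1 - 1), j - (n2 - 1)).
    x : vector of length n1*n2, viewed as an n1-by-n2 grid.
    Returns y = T x, also of length n1*n2.
    """
    # Roll the kernel so that lag (0, 0) sits at index (0, 0); this is
    # the generating "column" of the circulant extension of the matrix.
    c = np.roll(t, (-(n1 - 1), -(n2 - 1)), axis=(0, 1))
    # Zero-pad the input grid to the circulant size (2*n1-1, 2*n2-1).
    xp = np.zeros_like(c)
    xp[:n1, :n2] = x.reshape(n1, n2)
    # Circular convolution via the FFT applies the circulant matrix;
    # its leading n1-by-n2 block is the desired BTTB product.
    y = np.fft.ifft2(np.fft.fft2(c) * np.fft.fft2(xp)).real
    return y[:n1, :n2].ravel()
```

For a real kernel, the transpose product needed by the iterative solver uses the same routine with the kernel flipped in both lag directions (equivalently, the complex conjugate of its spectrum).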

 
Award ID(s): 1913136
NSF-PAR ID: 10405643
Publisher / Repository: Oxford University Press
Journal Name: Geophysical Journal International
Volume: 230
Issue: 3
ISSN: 0956-540X
Pages: 1585-1611
Sponsoring Org: National Science Foundation
More Like this
  1. An efficient algorithm for the Lp-norm joint inversion of gravity and magnetic data using the cross-gradient constraint is presented. The presented framework incorporates stabilizers that use Lp-norms (0 ≤ p ≤ 2) of the model parameters, and/or of the gradient of the model parameters. The formulation is developed from standard approaches for the independent inversion of single data sets, and thus also facilitates the inclusion of necessary model and data weighting matrices, for example, depth weighting and hard constraint matrices. Using the block-Toeplitz-Toeplitz-block structure of the underlying sensitivity matrices for the gravity and magnetic models, when data are obtained on a uniform grid, the blocks for each depth layer are embedded in block-circulant-circulant-block matrices. Then, all operations with these matrices are implemented efficiently using 2-D fast Fourier transforms, with a significant reduction in storage requirements. The non-linear global objective function is minimized iteratively by imposing stationarity on the linear equation that results from linearizing the objective function about a starting model. To numerically solve the resulting linear system at each iteration, the conjugate gradient algorithm is used. This is improved for large-scale problems by the introduction of an algorithm in which updates for the magnetic and gravity parameter models are alternated at each iteration, further reducing total computational cost and storage requirements. Numerical results using a complicated 3-D synthetic model and real data sets obtained over the Galinge iron-ore deposit in Qinghai province, northwest China, demonstrate the efficiency of the presented algorithm.
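    The cross-gradient constraint enforces structural similarity by driving t(x) = ∇m1(x) × ∇m2(x) toward zero, so that the gradients of the two models are parallel wherever both are non-zero. Below is a minimal sketch of this coupling term on a regular 3-D grid, using finite differences via np.gradient; the grid spacing and the quadratic penalty form are illustrative assumptions, not the paper's exact discretization.

    ```python
    import numpy as np

    def cross_gradient(m1, m2, spacing=(1.0, 1.0, 1.0)):
        """Cross-gradient field t = grad(m1) x grad(m2) on a 3-D grid.

        m1, m2 : 3-D arrays of model parameters (e.g. density and
                 susceptibility) on the same regular grid.
        Returns t with shape m1.shape + (3,); structurally similar models
        have parallel gradients, driving t toward zero everywhere.
        """
        g1 = np.stack(np.gradient(m1, *spacing), axis=-1)  # grad of model 1
        g2 = np.stack(np.gradient(m2, *spacing), axis=-1)  # grad of model 2
        return np.cross(g1, g2)

    # A quadratic coupling term for the joint objective could then be:
    # phi_cg = np.sum(cross_gradient(m1, m2) ** 2)
    ```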
  2. SUMMARY

    We discuss the focusing inversion of potential field data for the recovery of sparse subsurface structures from surface measurement data on a uniform grid. For the uniform grid, the model sensitivity matrices have a block-Toeplitz-Toeplitz-block structure for each block of columns related to a fixed depth layer of the subsurface. Then, all forward operations with the sensitivity matrix, or its transpose, are performed using the 2-D fast Fourier transform. Simulations are provided to show that the implementation of the focusing inversion algorithm using the fast Fourier transform is efficient, and that the algorithm can be realized on standard desktop computers with sufficient memory for storage of volumes up to size n ≈ 10⁶. The linear systems of equations arising in the focusing inversion algorithm are solved using either Golub–Kahan bidiagonalization or randomized singular value decomposition algorithms. These two algorithms are contrasted for their efficiency when used to solve large-scale problems, with respect to the sizes of the projected subspaces adopted for the solutions of the linear systems. The results confirm earlier studies that the randomized algorithms are to be preferred for the inversion of gravity data, and for data sets of size m it is sufficient to use projected spaces of size approximately m/8. For the inversion of magnetic data sets, we show that it is more efficient to use Golub–Kahan bidiagonalization, and that it is again sufficient to use projected spaces of size approximately m/8. Simulations support the presented conclusions and are verified for the inversion of a magnetic data set obtained over the Wuskwatim Lake region in Manitoba, Canada.
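    As a rough illustration of the randomized option, the sketch below builds a rank-k randomized SVD via the standard range-finder scheme and uses it to evaluate a Tikhonov-filtered solution; the regularization parameter alpha and the choice k ≈ m/8 are placeholders echoing the subspace sizes reported above, not values from the paper.

    ```python
    import numpy as np

    def rsvd(A, k, oversample=10, seed=0):
        """Basic randomized SVD: rank-k approximation A ~ U S V^T."""
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((A.shape[1], k + oversample))  # test matrix
        Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for the range
        Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

    def tikhonov_rsvd(A, d, alpha, k):
        """Approximate solution of min ||A x - d||^2 + alpha^2 ||x||^2,
        replacing the full SVD by its rank-k randomized approximation."""
        U, s, Vt = rsvd(A, k)
        filt = s / (s ** 2 + alpha ** 2)      # Tikhonov filter factors
        return Vt.T @ (filt * (U.T @ d))

    # Projected subspace of size ~ m/8 for m data, as found sufficient above:
    # x = tikhonov_rsvd(G, d_obs, alpha=1e-2, k=d_obs.size // 8)
    ```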
  3. SUMMARY

    Within the iron metallogenic province of southeast Missouri, USA, there are several mines that contain not only economic iron resources, magnetite and/or hematite, but also rare earth elements, copper and gold. An area including three major deposits, Pea Ridge, Bourbon and Kratz Spring, was selected for detailed modelling of the upper-crustal magnetic susceptibility and density structures. For the study area, ground gravity and high-resolution airborne magnetic and gravity gradiometry data sets are available. An efficient and novel joint inversion algorithm for the simultaneous inversion of these multiple data sets is presented. The Gramian coupling constraint is used to correlate the reconstructed density and magnetic susceptibility models. The implementation relies on the structures of the sensitivity matrices and an efficient minimization algorithm to achieve significant reductions in memory requirements and computational costs. Consequently, it is feasible to use a laptop computer for the inversion of multiple data sets, each containing thousands of data points, for the recovery of models on the study area, each including approximately one million model parameters. This is the first time that these multiple data sets have been simultaneously inverted for this area. The L1-norm stabilizer is used to provide compact and focused images of the ore deposits. For contrast, independent inversions of each data set are also discussed. In general, our results provide new insights about the concealed ore deposits in the Mesoproterozoic basement rocks of southeast Missouri. Both short- and long-wavelength anomalies exist in the recovered models; these provide a high-resolution image of the subsurface. The geometry and physical properties of the known deposits are determined very well. Additionally, some unknown concealed deposits are revealed; these could be economically valuable and should be considered in future geophysical and geological investigations.
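    For reference, the Gramian coupling used here (and in the headline article above) is the determinant of the 2 × 2 Gram matrix of the two model vectors; it vanishes exactly when the models are linearly correlated. A minimal sketch for raw (unweighted) model vectors, omitting the weighting matrices and the separate Lagrange multipliers of the full method:

    ```python
    import numpy as np

    def gramian(m1, m2):
        """Gramian coupling functional of two model vectors:
        S_G = det [[<m1, m1>, <m1, m2>], [<m2, m1>, <m2, m2>]]
            = |m1|^2 |m2|^2 - <m1, m2>^2 >= 0,
        zero exactly when m1 and m2 are linearly dependent, i.e. when
        the two reconstructed models are perfectly correlated."""
        a, b, c = m1 @ m1, m1 @ m2, m2 @ m2
        return a * c - b * b

    def gramian_grad_m1(m1, m2):
        """Gradient of S_G with respect to m1 (swap arguments for m2):
        dS_G/dm1 = 2 (|m2|^2 m1 - <m1, m2> m2)."""
        return 2.0 * ((m2 @ m2) * m1 - (m1 @ m2) * m2)
    ```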

     
  4. We have developed a memory- and operation-count-efficient 2.5D inversion algorithm for electrical resistivity (ER) data that can handle the fine discretization domains imposed by other geophysical (e.g., ground-penetrating radar or seismic) data. Due to numerical stability criteria and available computational memory, joint inversion of different types of geophysical data can impose different grid discretization constraints on the model parameters. Our algorithm enables the ER data sensitivities to be directly joined with other geophysical data without the need to interpolate or coarsen the discretization. We use the adjoint method directly in the discretized Maxwell's steady-state equation to compute the sensitivity of the data to the conductivity. In doing so, we make no finite-difference approximation of the Jacobian of the data and avoid the need to store large and dense matrices. Rather, we exploit matrix-vector multiplication with sparse matrices and find successful convergence using gradient descent for our inversion routine, without having to resort to the Hessian of the objective function. By assuming a 2.5D subsurface, we are able to reduce memory requirements linearly when compared with a 3D gradient descent inversion, and by a power of two when compared with storing a 2D Hessian. Moreover, the operation count of our method improves linearly on 3D Gauss-Newton conjugate-gradient schemes, an advantage that scales cubically in our favor with respect to the thickness of the 3D domain. We physically appraise the domain of the recovered conductivity using a cutoff of the electric current density present in our survey. We evaluate two case studies to assess the validity of our algorithm: first a 2.5D synthetic example, and then field data acquired in a controlled alluvial aquifer, where we were able to match the recovered conductivity to borehole observations.
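    The adjoint trick described here is generic: for a discretized PDE A(σ)u = q with observations d = Pu, one forward solve and one adjoint solve yield the misfit gradient for all model parameters, and no dense Jacobian is ever formed. A minimal sketch under those assumptions; assemble_A, dA_dsigma and P are hypothetical stand-ins for the paper's 2.5D finite-difference operators.

    ```python
    import numpy as np
    import scipy.sparse.linalg as spla

    def misfit_and_gradient(sigma, assemble_A, dA_dsigma, P, q, d_obs):
        """Adjoint-state gradient for a discretized PDE A(sigma) u = q
        with observations d = P u.

        assemble_A : sigma -> sparse system matrix A(sigma)
        dA_dsigma  : k -> sparse partial derivative dA/dsigma_k
        P          : sparse observation operator selecting data from u
        """
        A = assemble_A(sigma).tocsc()
        u = spla.spsolve(A, q)                    # forward solve
        r = P @ u - d_obs                         # data residual
        lam = spla.spsolve(A.T.tocsc(), P.T @ r)  # adjoint solve
        # d(phi)/d(sigma_k) = -lam^T (dA/dsigma_k) u; a loop for clarity,
        # though practical codes assemble all the inner products at once.
        grad = np.array([-(lam @ (dA_dsigma(k) @ u))
                         for k in range(sigma.size)])
        return 0.5 * (r @ r), grad
    ```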
  5. Abstract: The goal of this study is to develop a new computed tomography (CT) image reconstruction method, aiming at improving the quality of the reconstructed images of existing methods while reducing computational costs. Existing CT reconstruction is modeled by pixel-based piecewise-constant approximations of the integral equation that describes the CT projection data acquisition process. Using these approximations imposes a bottleneck model error and results in a discrete system of large size. We propose a content-adaptive unstructured grid (CAUG) based regularized CT reconstruction method to address these issues. Specifically, we design a CAUG of the image domain to sparsely represent the underlying image, and introduce a CAUG-based piecewise-linear approximation of the integral equation by employing a collocation method. We further apply a regularization defined on the CAUG to the resulting ill-posed linear system, which may lead to a sparse linear representation of the underlying solution. The regularized CT reconstruction is formulated as a convex optimization problem whose objective function consists of a weighted-least-squares fidelity term, a regularization term and a constraint term. Here, the corresponding weighting matrix is derived from the simultaneous algebraic reconstruction technique (SART). We then develop a SART-type preconditioned fixed-point proximity algorithm to solve the optimization problem. Convergence analysis is provided for the resulting iterative algorithm. Numerical experiments demonstrate the superiority of the proposed method over several existing methods in terms of both suppressing noise and reducing computational costs. These methods include SART without regularization and with quadratic regularization, the traditional total variation (TV) regularized reconstruction method, and the TV-superiorized conjugate gradient method on the pixel grid.
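    SART itself, whose row- and column-sum weighting supplies the weighted norm and preconditioner mentioned above, reduces to a simple fixed-point iteration. A minimal unregularized sketch for a dense system matrix; the paper's actual method adds the CAUG regularizer and proximity operators on top of this baseline.

    ```python
    import numpy as np

    def sart(A, d, n_iter=50, omega=1.0):
        """Plain (unregularized) SART for A m = d, with A a dense 2-D array.

        With V = diag(column sums of |A|) and W = diag(row sums of |A|),
        the update is  m <- m + omega * V^{-1} A^T W^{-1} (d - A m).
        """
        m = np.zeros(A.shape[1])
        col = np.abs(A).sum(axis=0)   # diagonal of V
        row = np.abs(A).sum(axis=1)   # diagonal of W
        col[col == 0] = 1.0           # guard against empty columns
        row[row == 0] = 1.0           # and empty rows
        for _ in range(n_iter):
            m += omega * (A.T @ ((d - A @ m) / row)) / col
        return m
    ```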