Title: A fast methodology for large-scale focusing inversion of gravity and magnetic data using the structured model matrix and the 2-D fast Fourier transform
SUMMARY

We discuss the focusing inversion of potential field data for the recovery of sparse subsurface structures from surface measurement data on a uniform grid. For the uniform grid, the model sensitivity matrices have a block-Toeplitz-Toeplitz-block (BTTB) structure for each block of columns related to a fixed depth layer of the subsurface. Then, all forward operations with the sensitivity matrix, or its transpose, are performed using the 2-D fast Fourier transform. Simulations are provided to show that the implementation of the focusing inversion algorithm using the fast Fourier transform is efficient, and that the algorithm can be realized on standard desktop computers with sufficient memory for storage of volumes up to size n ≈ 10⁶. The linear systems of equations arising in the focusing inversion algorithm are solved using either Golub–Kahan bidiagonalization or randomized singular value decomposition algorithms. These two algorithms are contrasted for their efficiency when used to solve large-scale problems with respect to the sizes of the projected subspaces adopted for the solutions of the linear systems. The results confirm earlier studies that the randomized algorithms are to be preferred for the inversion of gravity data, and for data sets of size m it is sufficient to use projected spaces of size approximately m/8. For the inversion of magnetic data sets, we show that it is more efficient to use the Golub–Kahan bidiagonalization, and that it is again sufficient to use projected spaces of size approximately m/8. Simulations support the presented conclusions and are verified for the inversion of a magnetic data set obtained over the Wuskwatim Lake region in Manitoba, Canada.
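As a concrete illustration of the matrix-free forward operation described above, the following is a minimal NumPy sketch of a BTTB matrix–vector product for a single depth layer, via embedding in a block-circulant-circulant-block (BCCB) matrix diagonalized by the 2-D FFT. The function name bttb_matvec and the centred kernel layout are illustrative assumptions, not code from the paper.

```python
import numpy as np

def bttb_matvec(t, x):
    """Apply the BTTB sensitivity block of one depth layer to a model slice.

    t : (2*n1 - 1, 2*n2 - 1) array of kernel values for lags
        -(n1-1)..n1-1 and -(n2-1)..n2-1, with lag (0, 0) at the centre.
    x : (n1, n2) model slice for that layer.
    Returns the (n1, n2) product without forming the dense matrix.
    """
    n1, n2 = x.shape
    # Rearrange the kernel into the first-column layout of the embedding
    # BCCB matrix: non-negative lags first, negative lags wrapped to the end.
    c = np.roll(t, shift=(-(n1 - 1), -(n2 - 1)), axis=(0, 1))
    # A BCCB matrix is diagonalized by the 2-D DFT, so its matvec is a
    # pointwise product in the Fourier domain (x is zero-padded to c.shape).
    y = np.fft.ifft2(np.fft.fft2(c) * np.fft.fft2(x, s=c.shape)).real
    # The leading (n1, n2) corner of the circular convolution is the BTTB
    # product; the remaining entries are embedding overspill.
    return y[:n1, :n2]
```

The full forward operation is then a sum of such per-layer products, the transpose operation follows analogously with the flipped kernel, and storage drops from a dense (n1·n2)-by-(n1·n2) block to a single (2·n1−1)-by-(2·n2−1) kernel per layer.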
Award ID(s):
1913136
NSF-PAR ID:
10274774
Date Published:
Journal Name:
Geophysical Journal International
Volume:
223
Issue:
2
ISSN:
0956-540X
Page Range / eLocation ID:
1378 to 1397
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. SUMMARY

    A fast algorithm for the large-scale joint inversion of gravity and magnetic data is developed. The algorithm uses a non-linear Gramian constraint to impose correlation between the density and susceptibility of the reconstructed models. The global objective function is formulated in the space of the weighted parameters, but the Gramian constraint is implemented in the original space, and the non-linear constraint is imposed using two separate Lagrange parameters, one for each model domain. It is significant that this combined approach, using the two spaces, provides more similarity between the reconstructed models. Moreover, it is shown theoretically that the gradient for the use of the unweighted space is not a scalar multiple of that used for the weighted space, and hence cannot be accounted for by adjusting the Lagrange parameters. It is assumed that the measured data are obtained on a uniform grid and that a consistent regular discretization of the volume domain is imposed. Then, the sensitivity matrices exhibit a block-Toeplitz-Toeplitz-block structure for each depth layer of the model domain, and both forward and transpose operations with the matrices can be implemented efficiently using two-dimensional fast Fourier transforms. This makes it feasible to solve large-scale problems with respect to both computational costs and memory demands, and to solve the non-linear problem by applying iterative methods that rely only on matrix–vector multiplications. As such, the use of the regularized reweighted conjugate gradient algorithm, in conjunction with the structure of the sensitivity matrices, leads to a fast methodology for large-scale joint inversion of geophysical data sets. Numerical simulations demonstrate that it is possible to apply a non-linear joint inversion algorithm, with Lp-norm stabilisers, for the reconstruction of large model domains on a standard laptop computer. It is demonstrated that, while the p = 1 choice provides sparse reconstructed solutions with sharp boundaries, it is also possible to use p = 2 in order to provide smooth and blurred models. The methodology is used for inverting gravity and magnetic data obtained over an area in the northwest of the Mesoproterozoic St Francois Terrane, southeast Missouri, USA.
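    The Gramian constraint mentioned above is, in its simplest form, the determinant of the Gram matrix of the two model vectors, which vanishes exactly when density and susceptibility are linearly correlated. A minimal sketch of this standard two-model form (the paper's weighted-space treatment and Lagrange parameters are omitted):

```python
import numpy as np

def gramian(m1, m2):
    """Gramian coupling term det([[m1.m1, m1.m2], [m2.m1, m2.m2]]): zero
    exactly when m1 and m2 are linearly dependent, so minimizing it
    promotes correlation between the two reconstructed models."""
    return np.dot(m1, m1) * np.dot(m2, m2) - np.dot(m1, m2) ** 2

def gramian_grad_m1(m1, m2):
    """Gradient of the Gramian with respect to m1; the gradient with
    respect to m2 follows by symmetry."""
    return 2.0 * (np.dot(m2, m2) * m1 - np.dot(m1, m2) * m2)
```

    Adding this term to the two data misfits, each scaled by its own Lagrange parameter, drives the density and susceptibility models toward structural similarity.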

     
  2. An efficient algorithm for the Lp-norm joint inversion of gravity and magnetic data using the cross-gradient constraint is presented. The presented framework incorporates stabilizers that use Lp-norms (0 ≤ p ≤ 2) of the model parameters, and/or of the gradient of the model parameters. The formulation is developed from standard approaches for independent inversion of single data sets and thus also facilitates the inclusion of necessary model and data weighting matrices, for example, depth weighting and hard constraint matrices. Using the block-Toeplitz-Toeplitz-block structure of the underlying sensitivity matrices for gravity and magnetic models, when data are obtained on a uniform grid, the blocks for each depth layer are embedded in block-circulant-circulant-block matrices. Then, all operations with these matrices are implemented efficiently using 2-D fast Fourier transforms, with a significant reduction in storage requirements. The nonlinear global objective function is minimized iteratively by imposing stationarity on the linear equation that results from linearizing the objective function about a starting model. The resulting linear system is solved numerically at each iteration using the conjugate gradient algorithm. For large-scale problems, this is improved by an algorithm in which updates for the magnetic and gravity parameter models are alternated at each iteration, further reducing total computational cost and storage requirements. Numerical results using a complicated 3-D synthetic model and real data sets obtained over the Galinge iron-ore deposit in the Qinghai province, northwest China, demonstrate the efficiency of the presented algorithm.
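    For the Lp-norm stabilizers referenced above, a common matrix-free realization is iteratively reweighted least squares (IRLS), in which the Lp term is replaced at each iteration by a weighted L2 term. A minimal sketch under the usual (m_i² + ε²)^(p/2) smoothing assumption; the precise weighting used in the paper may differ:

```python
import numpy as np

def lp_stabilizer_weights(m, p, eps=1e-9):
    """Diagonal IRLS weights: with W = diag(w), ||W @ m||_2^2 approximates
    the Lp stabilizer sum_i (m_i**2 + eps**2)**(p/2). The focusing
    parameter eps avoids division by zero as p -> 0 and sets the scale
    below which entries are treated as numerically zero."""
    return (m**2 + eps**2) ** ((p - 2.0) / 4.0)
```

    With p = 2 the weights are constant and the stabilizer is smooth; for p near 1 or 0, entries that are already small receive large weights and are driven further toward zero, which is what produces sparse models with sharp boundaries.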
  3. Summary

    The Golub–Kahan bidiagonalization is widely used in the singular value decomposition of rectangular matrices and has been generalized to an iterative solver for symmetric indefinite linear systems with a two-by-two block structure. In this work, we present a scalability study of this generalized solver as implemented in a recent release of the parallel numerical library PETSc (Portable, Extensible Toolkit for Scientific Computation). We present improved solver performance for the two-dimensional (2D) Stokes equations as compared to previous work. Furthermore, we investigate the performance of different parallel inner solvers in the outer Golub–Kahan iteration for a three-dimensional Stokes problem. The study includes parallel sparse direct solvers and multigrid methods. When increasing the number of cores for a fixed total problem size, the solver exhibits good speedups of up to 50% at the 1024 core count. For the tests in which the total problem size grows while the workload on each core stays constant, the parallel performance of the solver scales almost linearly with the core count. In particular, the computation time increases only by about 15% when the number of cores increases from 80 to 1024 for a 2D test case.
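    For reference, the outer iteration studied above is built on the classical Golub–Kahan recurrence, which touches the matrix only through products with A and its transpose. A serial textbook sketch, without reorthogonalization or any of the parallel PETSc machinery of the paper:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization started from b.

    Returns U (m, k+1), lower-bidiagonal B (k+1, k) and V (n, k) with
    A @ V = U @ B in exact arithmetic; only matrix-vector products with
    A and A.T are required."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        r = A.T @ U[:, j]
        if j > 0:
            r -= B[j, j - 1] * V[:, j - 1]
        B[j, j] = np.linalg.norm(r)        # alpha_j, the diagonal entry
        V[:, j] = r / B[j, j]
        p = A @ V[:, j] - B[j, j] * U[:, j]
        B[j + 1, j] = np.linalg.norm(p)    # beta_{j+1}, the subdiagonal entry
        U[:, j + 1] = p / B[j + 1, j]
    return U, B, V
```

    Roughly speaking, in the two-by-two block setting of the paper each outer step additionally requires an inner solve with a diagonal block, which is where the sparse direct and multigrid inner solvers of the study enter.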

     
  4. Abstract

    Randomized methods can be competitive for the solution of problems with a large matrix of low rank. They have also been applied successfully to the solution of large-scale linear discrete ill-posed problems by Tikhonov regularization (Xiang and Zou in Inverse Probl 29:085008, 2013). This entails the computation of an approximation of a partial singular value decomposition of a large matrix A that is of numerical low rank. The present paper compares a randomized method to a Krylov subspace method based on Golub–Kahan bidiagonalization with respect to accuracy and computing time, and discusses characteristics of linear discrete ill-posed problems that make them well suited for solution by a randomized method.
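    The randomized side of such comparisons is typically the Gaussian range-finder construction of Halko, Martinsson and Tropp; the following minimal sketch shows the idea (the oversampling choice and function name are illustrative, not details taken from the paper):

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Approximate rank-k SVD of A via a randomized range finder."""
    rng = np.random.default_rng(seed)
    # A Gaussian sketch of the column space captures the dominant
    # singular subspace of a numerically low-rank A with high probability.
    Q, _ = np.linalg.qr(A @ rng.standard_normal((A.shape[1], k + oversample)))
    # Exact SVD of the small (k + oversample)-by-n projected matrix.
    Uh, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Uh)[:, :k], s[:k], Vt[:k]
```

    Only the products with A touch the full matrix, which is why numerical low rank, with quickly decaying singular values, makes discrete ill-posed problems natural candidates for this approach.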
  5. Randomized matrix algorithms have had significant recent impact on numerical linear algebra. One especially powerful class of methods is algorithms for approximate matrix multiplication based on sampling. Such methods typically sample individual matrix rows and columns using carefully chosen importance sampling probabilities. However, due to practical considerations like memory locality and the preservation of matrix structure, it is often preferable to sample contiguous blocks of rows and columns together. Recently, (Wu, 2018) addressed this setting by developing an approximate matrix multiplication method based on block sampling. However, the method is inefficient, as it requires knowledge of optimal importance sampling probabilities that are expensive to compute. We address this issue by showing that the method of Wu can be accelerated through the use of a randomized implicit trace estimation method. Doing so allows us to provably reduce the cost of sampling to near-linear in the size of the matrices being multiplied, without impacting the accuracy of the final approximate matrix multiplication. Overall, this yields a fast practical algorithm, which we test on a number of synthetic and real-world data sets. We complement our algorithmic contribution with the first extensive empirical comparison of block algorithms for randomized matrix multiplication. Our method offers a significant runtime advantage over the method of (Wu, 2018), and it also outperforms basic uniform sampling of blocks. However, we find that another recent method (Charalambides, 2021), which uses sub-optimal but efficiently computable sampling probabilities, often (but not always) offers the best trade-off between speed and accuracy.
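    To fix ideas, here is a minimal sketch of block-sampling approximate matrix multiplication. It uses cheap Frobenius-norm-based probabilities in the spirit of the efficiently computable (rather than optimal) probabilities discussed above; the estimator is unbiased for any strictly positive probabilities:

```python
import numpy as np

def block_sample_matmul(A, B, block_size, s, seed=0):
    """Estimate A @ B from s sampled column-blocks of A (and the matching
    row-blocks of B), importance-weighted so the estimate is unbiased."""
    rng = np.random.default_rng(seed)
    starts = np.arange(0, A.shape[1], block_size)
    # Cheap importance weights: product of the blocks' Frobenius norms.
    w = np.array([np.linalg.norm(A[:, i:i + block_size]) *
                  np.linalg.norm(B[i:i + block_size, :]) for i in starts])
    p = w / w.sum()
    est = np.zeros((A.shape[0], B.shape[1]))
    for i in rng.choice(len(starts), size=s, p=p):
        lo = starts[i]
        # Rescaling by 1 / (s * p[i]) makes each sampled term unbiased.
        est += A[:, lo:lo + block_size] @ B[lo:lo + block_size, :] / (s * p[i])
    return est
```

    The trace-estimation acceleration that is the paper's contribution replaces these cheap probabilities with provably near-optimal ones at near-linear cost; that step is not reproduced here.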