An efficient algorithm for the Lp-norm joint inversion of gravity and magnetic data using the cross-gradient constraint is presented. The framework incorporates stabilizers that use Lp-norms (0 ≤ p ≤ 2) of the model parameters and/or of the gradient of the model parameters. The formulation is developed from standard approaches for independent inversion of single data sets and thus also facilitates the inclusion of the necessary model and data weighting matrices, for example, depth weighting and hard constraint matrices. Using the block-Toeplitz-Toeplitz-block (BTTB) structure of the underlying sensitivity matrices for the gravity and magnetic models, when the data are obtained on a uniform grid, the blocks for each depth layer are embedded in block-circulant-circulant-block (BCCB) matrices. All operations with these matrices are then implemented efficiently using 2-D fast Fourier transforms, with a significant reduction in storage requirements. The nonlinear global objective function is minimized iteratively by imposing stationarity on the linear equation that results from linearizing the objective function about a starting model. At each iteration, the resulting linear system is solved numerically using the conjugate gradient algorithm. This is improved for large-scale problems by the introduction of an algorithm in which updates …
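As a hedged illustration of the FFT-based operations described above (a generic sketch, not the authors' code), the snippet below applies one BTTB sensitivity block to a model vector by embedding it in a BCCB matrix and using 2-D fast Fourier transforms; the kernel layout, index ordering and the small self-check are assumptions made for the example.

```python
import numpy as np

def bttb_matvec_fft(t, x, n1, n2):
    """Apply the (n1*n2) x (n1*n2) BTTB matrix T, with entries
    T[(i1,i2),(j1,j2)] = t[i1-j1+n1-1, i2-j2+n2-1], to the vector x.

    t : (2*n1-1, 2*n2-1) array of lags (assumed layout for this example)
    x : (n1*n2,) vector ordered with the second grid index varying fastest
    """
    m1, m2 = 2 * n1 - 1, 2 * n2 - 1
    # "First column" of the BCCB embedding: non-negative lags first,
    # negative lags wrapped around.
    c = np.zeros((m1, m2))
    c[:n1, :n2] = t[n1 - 1:, n2 - 1:]   # lags k1 >= 0, k2 >= 0
    c[:n1, n2:] = t[n1 - 1:, :n2 - 1]   # lags k1 >= 0, k2 <  0
    c[n1:, :n2] = t[:n1 - 1, n2 - 1:]   # lags k1 <  0, k2 >= 0
    c[n1:, n2:] = t[:n1 - 1, :n2 - 1]   # lags k1 <  0, k2 <  0
    x_pad = np.zeros((m1, m2))
    x_pad[:n1, :n2] = x.reshape(n1, n2)
    # BCCB matvec = 2-D circular convolution = pointwise product of 2-D FFTs.
    y = np.fft.ifft2(np.fft.fft2(c) * np.fft.fft2(x_pad)).real
    return y[:n1, :n2].ravel()

# Tiny self-check against an explicit dense BTTB matrix (illustration only).
rng = np.random.default_rng(0)
n1, n2 = 4, 5
t = rng.standard_normal((2 * n1 - 1, 2 * n2 - 1))
x = rng.standard_normal(n1 * n2)
T = np.array([[t[i1 - j1 + n1 - 1, i2 - j2 + n2 - 1]
               for j1 in range(n1) for j2 in range(n2)]
              for i1 in range(n1) for i2 in range(n2)])
assert np.allclose(T @ x, bttb_matvec_fft(t, x, n1, n2))
```

Storing only the (2n1-1) x (2n2-1) kernel per depth layer, rather than the full (n1 n2) x (n1 n2) block, is the source of the storage reduction mentioned in the abstract.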
Large-scale focusing joint inversion of gravity and magnetic data with Gramian constraint
SUMMARY A fast algorithm for the large-scale joint inversion of gravity and magnetic data is developed. The algorithm uses a non-linear Gramian constraint to impose correlation between the density and susceptibility of the reconstructed models. The global objective function is formulated in the space of the weighted parameters, but the Gramian constraint is implemented in the original space, and the non-linear constraint is imposed using two separate Lagrange parameters, one for each model domain. It is significant that this combined approach, using the two spaces, provides greater similarity between the reconstructed models. Moreover, it is shown theoretically that the gradient for the unweighted space is not a scalar multiple of that for the weighted space, and hence cannot be accounted for by adjusting the Lagrange parameters. It is assumed that the measured data are obtained on a uniform grid and that a consistent regular discretization of the volume domain is imposed. Then, the sensitivity matrices exhibit a block-Toeplitz-Toeplitz-block structure for each depth layer of the model domain, and both forward and transpose operations with these matrices can be implemented efficiently using two-dimensional fast Fourier transforms. This makes it feasible to solve large-scale problems …
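The Gramian constraint referred to above has a compact form: it is the determinant of the 2 x 2 Gram matrix of the two model vectors, which vanishes exactly when the models are linearly correlated. The following hedged Python sketch (not the paper's implementation; the variable names and the per-domain gradient assembly are illustrative assumptions) shows the term, its gradient, and how a separate Lagrange parameter can scale it in each model domain.

```python
import numpy as np

def gramian(m1, m2):
    """Determinant of the 2x2 Gram matrix of m1 and m2; it is zero exactly
    when m1 and m2 are linearly dependent (i.e. fully correlated)."""
    return np.dot(m1, m1) * np.dot(m2, m2) - np.dot(m1, m2) ** 2

def gramian_grad(m1, m2):
    """Gradient of gramian(m1, m2) with respect to its first argument."""
    return 2.0 * (np.dot(m2, m2) * m1 - np.dot(m1, m2) * m2)

def coupled_gradients(rho, kappa, grad_data_rho, grad_data_kappa,
                      lam_rho, lam_kappa):
    """Add the Gramian coupling to the data-misfit gradient of each domain,
    each scaled by its own Lagrange parameter (rho: density, kappa: susceptibility)."""
    g_rho = grad_data_rho + lam_rho * gramian_grad(rho, kappa)
    g_kappa = grad_data_kappa + lam_kappa * gramian_grad(kappa, rho)
    return g_rho, g_kappa
```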
- Award ID(s): 1913136
- Publication Date:
- NSF-PAR ID: 10340904
- Journal Name: Geophysical Journal International
- Volume: 230
- Issue: 3
- Page Range or eLocation-ID: 1585 to 1611
- ISSN: 0956-540X
- Sponsoring Org: National Science Foundation
More Like this
SUMMARY We discuss the focusing inversion of potential field data for the recovery of sparse subsurface structures from surface measurement data on a uniform grid. For the uniform grid, the model sensitivity matrices have a block-Toeplitz-Toeplitz-block (BTTB) structure for each block of columns related to a fixed depth layer of the subsurface. Then, all forward operations with the sensitivity matrix, or its transpose, are performed using the 2-D fast Fourier transform. Simulations are provided to show that the implementation of the focusing inversion algorithm using the fast Fourier transform is efficient, and that the algorithm can be realized on standard desktop computers with sufficient memory for storage of volumes up to size n ≈ 10^6. The linear systems of equations arising in the focusing inversion algorithm are solved using either Golub–Kahan bidiagonalization or randomized singular value decomposition algorithms. These two algorithms are contrasted for their efficiency when used to solve large-scale problems, with respect to the sizes of the projected subspaces adopted for the solutions of the linear systems. The results confirm earlier studies that the randomized algorithms are to be preferred for the inversion of gravity data, and for data sets of size m it is sufficient …
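For readers unfamiliar with the randomized alternative, the hedged sketch below (a generic illustration, not the paper's code; the target rank q, oversampling p and the simple Tikhonov regularizer are assumptions) shows how a randomized singular value decomposition can be built from a Gaussian range sketch and used to solve a regularized linear system.

```python
import numpy as np

def randomized_svd(A, q, p=10, rng=None):
    """Rank-q approximate SVD of A via a Gaussian range sketch with oversampling p."""
    rng = np.random.default_rng(rng)
    Y = A @ rng.standard_normal((A.shape[1], q + p))   # sample the range of A
    Q, _ = np.linalg.qr(Y)                             # orthonormal basis of the sketch
    B = Q.T @ A                                        # small (q+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :q], s[:q], Vt[:q, :]

def tikhonov_solve_rsvd(A, d, alpha, q):
    """Approximate argmin_m ||A m - d||^2 + alpha^2 ||m||^2 using the truncated
    randomized SVD factors and the corresponding Tikhonov filter factors."""
    U, s, Vt = randomized_svd(A, q)
    filt = s / (s ** 2 + alpha ** 2)
    return Vt.T @ (filt * (U.T @ d))
```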
We have developed a memory- and operation-count-efficient 2.5D inversion algorithm for electrical resistivity (ER) data that can handle the fine discretization domains imposed by other geophysical (e.g., ground penetrating radar or seismic) data. Owing to numerical stability criteria and available computational memory, joint inversion of different types of geophysical data can impose different grid discretization constraints on the model parameters. Our algorithm enables the ER data sensitivities to be joined directly with other geophysical data without the need to interpolate or coarsen the discretization. We use the adjoint method directly on the discretized Maxwell's steady-state equation to compute the sensitivity of the data to the conductivity. In doing so, we make no finite-difference approximation of the Jacobian of the data and avoid the need to store large, dense matrices. Rather, we exploit matrix-vector multiplication with sparse matrices and achieve convergence using gradient descent for our inversion routine without having to resort to the Hessian of the objective function. By assuming a 2.5D subsurface, we are able to reduce memory requirements linearly when compared to a 3D gradient descent inversion, and by a power of two when compared to storing a 2D Hessian. Moreover, our method linearly outperforms operation …
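The adjoint-based gradient described above can be sketched generically as follows (a hedged illustration under assumed interfaces assemble_A, dA_dsigma and Q, not the authors' code): one sparse forward solve, one sparse adjoint solve and sparse matrix-vector products give the misfit gradient without ever forming or storing the Jacobian.

```python
import numpy as np
import scipy.sparse.linalg as spla

def misfit_and_gradient(sigma, assemble_A, dA_dsigma, b, Q, d_obs):
    """phi(sigma) = 0.5 ||Q u - d_obs||^2 with A(sigma) u = b.
    From the adjoint equation A^T lambda = Q^T (Q u - d_obs), the gradient is
        d phi / d sigma_k = -lambda^T (dA/dsigma_k) u."""
    A = assemble_A(sigma)                          # sparse system matrix
    u = spla.spsolve(A.tocsc(), b)                 # forward solve
    r = Q @ u - d_obs                              # data residual
    lam = spla.spsolve(A.T.tocsc(), Q.T @ r)       # adjoint solve
    # One sparse matvec per parameter; real codes exploit the sparsity pattern
    # of dA/dsigma_k instead of looping naively.
    grad = np.array([-lam @ (dA_dsigma(k) @ u) for k in range(sigma.size)])
    return 0.5 * float(r @ r), grad

def gradient_descent(sigma0, step, n_iter, *args):
    """Fixed-step gradient descent; in practice a line search would be used."""
    sigma = sigma0.copy()
    for _ in range(n_iter):
        _, g = misfit_and_gradient(sigma, *args)
        sigma = sigma - step * g
    return sigma
```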
Abstract The goal of this study is to develop a new computed tomography (CT) image reconstruction method that improves the quality of the reconstructed images of existing methods while reducing computational costs. Existing CT reconstruction is modeled by pixel-based piecewise-constant approximations of the integral equation that describes the CT projection data acquisition process. Using these approximations imposes a bottleneck model error and results in a discrete system of large size. We propose a content-adaptive unstructured grid (CAUG) based regularized CT reconstruction method to address these issues. Specifically, we design a CAUG of the image domain to sparsely represent the underlying image, and introduce a CAUG-based piecewise-linear approximation of the integral equation by employing a collocation method. We further apply a regularization defined on the CAUG to the resulting ill-posed linear system, which may lead to a sparse linear representation of the underlying solution. The regularized CT reconstruction is formulated as a convex optimization problem whose objective function consists of a weighted least-squares fidelity term, a regularization term and a constraint term. Here, the corresponding weighting matrix is derived from the simultaneous algebraic reconstruction technique (SART). We then develop a SART-type preconditioned fixed-point …
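As a hedged sketch of the kind of SART-weighted, preconditioned iteration described above (not the paper's algorithm; a plain Tikhonov regularizer and a nonnegativity constraint stand in for the CAUG-based regularization and constraint terms), the snippet below uses the SART row- and column-sum weights of the projection matrix as data weights and preconditioner.

```python
import numpy as np
import scipy.sparse as sp

def sart_preconditioned_iteration(A, b, x0, mu, n_iter, step=1.0):
    """Projected, preconditioned gradient iteration for
        min_x 0.5 (A x - b)^T W (A x - b) + 0.5 * mu * ||x||^2,  x >= 0,
    with W = diag(1 / row sums of |A|) and preconditioner V^{-1} =
    diag(1 / column sums of |A|), i.e. the weights used by SART."""
    row_sums = np.asarray(abs(A).sum(axis=1)).ravel()
    col_sums = np.asarray(abs(A).sum(axis=0)).ravel()
    W = sp.diags(1.0 / np.maximum(row_sums, 1e-12))      # data weights
    Vinv = sp.diags(1.0 / np.maximum(col_sums, 1e-12))   # preconditioner
    x = x0.copy()
    for _ in range(n_iter):
        grad = A.T @ (W @ (A @ x - b)) + mu * x          # fidelity + Tikhonov gradient
        x = np.maximum(x - step * (Vinv @ grad), 0.0)    # preconditioned step, x >= 0
    return x
```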
We develop a new computational framework to solve the partial differential equations (PDEs) governing the flow of the joint probability density functions (PDFs) in continuous-time stochastic nonlinear systems. The need to compute transient joint PDFs subject to prior dynamics arises in uncertainty propagation, nonlinear filtering and stochastic control. Our methodology breaks away from the traditional approach of spatial discretization or function approximation, both of which, in general, suffer from the "curse of dimensionality". In the proposed framework, we discretize time but not the state space. We solve infinite-dimensional proximal recursions in the manifold of joint PDFs, which, in the small time-step limit, is theoretically equivalent to solving the underlying transport PDEs. The resulting computation has the geometric interpretation of the gradient flow of a certain free-energy functional with respect to the Wasserstein metric arising from the theory of optimal mass transport. We show that dualization, along with an entropic regularization, leads to a cone-preserving fixed-point recursion that is proved to be contractive in the Thompson metric. A block coordinate iteration scheme is proposed to solve the resulting nonlinear recursions with guaranteed convergence. This approach enables remarkably fast computation for non-parametric transient joint PDF propagation. Numerical examples and various extensions are …
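The cone-preserving, block-coordinate structure mentioned above is easiest to see in the plain entropy-regularized optimal transport problem, whose Sinkhorn fixed point is sketched below; this is only a hedged analogue, since the paper's recursion solves a Wasserstein proximal step of a free-energy functional, which adds a further nonlinear relation to the updates.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps, n_iter=500):
    """Entropy-regularized optimal transport between discrete densities mu and nu
    with cost matrix C. Alternating the two block updates is a cone-preserving
    fixed-point iteration: each map sends positive vectors to positive vectors."""
    K = np.exp(-C / eps)                # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)              # block update in v with u held fixed
        u = mu / (K @ v)                # block update in u with v held fixed
    return u, v, u[:, None] * K * v[None, :]   # scalings and transport plan
```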