


Title: Towards the geological parametrization of seismic tomography

Seismic tomography is a cornerstone of geophysics and has led to a number of important discoveries about the interior of the Earth. However, most tomographic applications involve a large number of unknown parameters, leaving the inverse problem underdetermined and requiring significant non-geologically motivated smoothing to achieve unique answers. Although this is acceptable when tomography is used as an explorative tool in discovery mode, it presents a significant problem to the use of tomography for distinguishing between acceptable geological models or for estimating geologically relevant parameters, since typically none of the geological models considered are fit by the tomographic results, even when uncertainties are accounted for. To address this challenge, when seismic tomography is to be used for geological model selection or parameter estimation, we advocate that the tomography be explicitly parametrized in terms of the geological models being tested, instead of using more mathematically convenient formulations such as voxels, splines or spherical harmonics. This proposition carries a number of technical difficulties, among the most important being the move from a linear to a non-linear inverse problem, the need to choose a geological parametrization that fits each specific problem and is commensurate with the expected data quality and structure, and the need for a supporting framework to identify which model is preferred by the tomographic data. In this contribution, we introduce the geological parametrization of tomography with a few simple synthetic examples applied to imaging sedimentary basins and subduction zones, and one real-world example of inferring basin and crustal properties across the continental United States. We explain the challenges in moving towards more realistic examples, and discuss the main technical difficulties and how they may be overcome.
Although it may take a number of years for the scientific program suggested here to reach maturity, steps in this direction are necessary if seismic tomography is to develop from a tool for discovering plausible structures into one with which distinct scientific inferences can be made regarding the presence or absence of structures and their physical characteristics.
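As a toy illustration of what a geological parametrization can look like (a simplified sketch, not the paper's actual implementation, with all numbers hypothetical), the snippet below inverts synthetic vertical travel times directly for two geological parameters of a Gaussian-shaped sedimentary basin, its maximum thickness and half-width, given an assumed sediment velocity. The problem is non-linear in the geological parameters, so a non-linear least-squares solver replaces the usual linear voxel inversion:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical setup: a Gaussian-shaped basin of maximum thickness h0 (km)
# and half-width w (km), filled with sediments of velocity 2.5 km/s, over
# basement of velocity 5.0 km/s. Values are illustrative only.
V_SED, V_BASE, Z_MAX = 2.5, 5.0, 10.0     # velocities (km/s), column depth (km)
x_sta = np.linspace(-15.0, 15.0, 31)      # surface station positions (km)

def travel_times(params, x):
    h0, w = params
    h = h0 * np.exp(-x**2 / (2.0 * w**2))        # basin thickness beneath x
    return h / V_SED + (Z_MAX - h) / V_BASE      # vertical one-way time (s)

# Synthetic "observed" data from a known model, with small noise
rng = np.random.default_rng(1)
true = np.array([2.0, 5.0])
t_obs = travel_times(true, x_sta) + 0.005 * rng.normal(size=x_sta.size)

# Non-linear inversion directly for the two geological parameters
fit = least_squares(lambda p: travel_times(p, x_sta) - t_obs, x0=[1.0, 3.0])
```

Note that if the sediment velocity were also left free, it would trade off against thickness through the delay-time product, which is one reason the choice of geological parametrization must be commensurate with the data quality and structure.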

Publisher / Repository: Oxford University Press
Journal Name: Geophysical Journal International
Pages: 1447-1462
Sponsoring Org: National Science Foundation
More Like this

  1. The near-surface seismic structure (to a depth of about 1000 m), particularly the shear wave velocity (VS), can strongly affect the propagation of seismic waves and must therefore be accurately calibrated for ground motion simulations and seismic hazard assessment. The VS of the top (<300 m) crust is often well characterized from borehole studies, geotechnical measurements, and water and oil wells, while the velocities of material deeper than about 1000 m are typically determined by tomography studies. However, in depth ranges lacking information on shallow lithological stratification, typically rock sites outside the sedimentary basins, the material parameters between these two regions are poorly characterized due to the resolution limits of seismic tomography. When such geological constraints are not available, models such as the Southern California Earthquake Center (SCEC) Community Velocity Models (CVMs) default to regional tomographic estimates that do not resolve the uppermost VS values, and therefore deliver unrealistically high shallow VS estimates. The SCEC Unified Community Velocity Model (UCVM) software includes a method to incorporate the near-surface earth structure by applying a generic overlay based on measurements of time-averaged VS in the top 30 m (VS30) to taper the upper part of the model to merge with tomography at a depth of 350 m, which can be applied to any of the velocity models accessible through UCVM. However, our 3-D simulations of the 2014 Mw 5.1 La Habra earthquake in the Los Angeles area using the CVM-S4.26.M01 model significantly underpredict low-frequency (<1 Hz) ground motions at sites where the material properties in the top 350 m are significantly modified by the generic overlay ('taper').
On the other hand, extending the VS30-based taper of the shallow velocities down to a depth of about 1000 m improves the fit between our synthetics and seismic data at those sites, without compromising the fit at well-constrained sites. We explore various tapering depths, finding that amplification increases with tapering depth and that the model with a 1000 m tapering depth yields the overall most favourable results. Effects of varying anelastic attenuation are small compared to the effects of velocity tapering and do not significantly bias the estimated tapering depth. Although a uniform tapering depth is adopted in the models, we observe spatial variability that may allow further refinement of the method.
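The idea of blending a VS30-derived surface value into a tomographic profile over a chosen tapering depth can be sketched as follows. This is a minimal linear blend for illustration only; the actual UCVM taper uses its own functional form, and the profile below is hypothetical:

```python
import numpy as np

def tapered_vs(z, vs30, vs_tomo, z_taper=1000.0):
    """Blend a VS30-derived surface value into a tomographic VS profile.

    z       : depth(s) in metres
    vs30    : time-averaged shear velocity of the top 30 m (m/s)
    vs_tomo : callable returning tomographic VS at depth z (m/s)
    z_taper : depth (m) below which the tomographic model is used unchanged

    A simple linear blend -- NOT the actual UCVM taper formulation.
    """
    z = np.asarray(z, dtype=float)
    w = np.clip(z / z_taper, 0.0, 1.0)   # weight: 0 at surface, 1 at z_taper
    return (1.0 - w) * vs30 + w * vs_tomo(z)

# Hypothetical tomographic profile that is unrealistically fast near the surface
vs_tomo = lambda z: 2000.0 + 0.5 * z
profile = tapered_vs([0.0, 500.0, 1000.0, 2000.0], vs30=400.0, vs_tomo=vs_tomo)
```

At the surface the profile honours the VS30-based value (400 m/s here), while at and below the tapering depth it reverts to the unmodified tomographic estimate, which is the behaviour the study varies by moving `z_taper` from 350 m to about 1000 m.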


  2. Differences between P- and S-wave models have frequently been used as evidence for the presence of large-scale compositional heterogeneity in the Earth's mantle. Our two-step machine learning (ML) analysis of 28 P- and S-wave global tomographic models reveals that, on a global scale, such differences are for the most part not intrinsic and could be reduced by changing the models in their respective null spaces. In other words, P- and S-wave images of mantle structure are not necessarily distinct from each other. Thus, a purely thermal explanation for large-scale seismic structure is sufficient at present; significant mantle compositional heterogeneities do not need to be invoked. We analyse 28 widely used tomographic models based on theoretical approximations ranging from ray theory (e.g. UU-P07 and MIT-P08) and Born scattering (e.g. DETOX) to full-waveform techniques (e.g. CSEM and GLAD). We apply Varimax principal component analysis to reduce tomography model dimensionality by 83 per cent, while preserving the relevant information (94 per cent of the original variance), followed by hierarchical clustering (HC) analysis using Ward's method to quantitatively categorize all models into hierarchical groups based on similarity. We find two main tomography model clusters: Cluster 1, which we call 'Pure P wave', is composed of six P-wave models that use only longitudinal body wave phases (e.g. P, PP and Pdiff); Cluster 2, which we call 'Mixed', includes both P- and S-wave models. P-wave models in the 'Mixed' cluster use inversion methods that include inputs from other geophysical and geological data sources, and this makes them more similar to S-wave models than to Pure P-wave models without significant loss of fit to P-wave data.
Given that the inclusion of new data classes and seismic phases in more recent tomographic models significantly changes the imaged seismic structure, our ML assessment of global tomography model similarity may improve the selection of appropriate P- and S-wave models for future global tomography comparative studies.
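The two-step workflow above (dimensionality reduction, then hierarchical clustering with Ward's method) can be sketched on synthetic stand-in data. The models below are random vectors, the PCA omits the Varimax rotation the study applies, and all sizes are illustrative, not the study's setup:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical stand-ins for tomographic models: each row is one model's
# velocity anomalies sampled on a common grid (here 50 points). Four models
# share a "pure P-wave" pattern, four share a "mixed" pattern.
base_p = rng.normal(size=50)
base_s = rng.normal(size=50)
models = np.vstack([base_p + 0.1 * rng.normal(size=50) for _ in range(4)] +
                   [base_s + 0.1 * rng.normal(size=50) for _ in range(4)])

# Step 1: dimensionality reduction via PCA (SVD of the centred data);
# the study additionally applies a Varimax rotation, omitted here for brevity.
X = models - models.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :2] * S[:2]                # first two principal components

# Step 2: hierarchical clustering with Ward's method, cut into two clusters
labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
```

With well-separated groups the cluster labels recover the two families, mirroring how the study's dendrogram separates 'Pure P wave' from 'Mixed' models.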


  3. Muography is an imaging tool based on the attenuation of cosmic muons, used to observe density anomalies associated with large objects such as underground caves or fractured zones. Tomography based on muography measurements, that is, 3-D reconstruction of the density distribution from 2-D muon flux maps, brings special challenges. The coverage of the detector fields of view must be as balanced as possible, considering the drop in muon flux at high zenith angles and the constraints on detector placement. The inversion from directional muon fluxes to a 3-D density map is usually underdetermined (more voxels than measurements), so the solution can be unstable under partial coverage. The instability can be remedied by geologically relevant Bayesian constraints; however, the Bayesian principle introduces parameter bias and artefacts. In this work, a linearized (density-length-based) inversion is applied, with the constraints formulated so as to ensure the stability of the parameter fitting. After testing the procedure on synthetic examples, an actual high-quality muography data set measured from seven positions is used as input for the inversion. The resulting tomographic imaging provides details of the complicated internal structure of a karstic fracture zone. The existence of low-density zones in the imaged space was verified by samples from core drills, which consist of altered dolomite powder within the intact high-density dolomite.
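A density-length-based linearized inversion of the kind described can be sketched as damped least squares around a geological prior. The toy geometry, densities, and damping below are hypothetical, chosen only to show how a prior stabilizes an underdetermined voxel inversion:

```python
import numpy as np

# Voxel densities (g/cm^3): intact dolomite ~2.8 with a low-density
# (powdered) zone in voxel 2 -- illustrative values, not measured ones.
rho_true = np.array([2.8, 2.8, 1.5, 2.8, 2.8])

# Path-length matrix L (m per voxel along each muon direction): 3 rays for
# 5 voxels, so the linear problem d = L @ rho is underdetermined.
L = np.array([[1.0, 1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 1.0]])
d = L @ rho_true                          # measured density-lengths

# Damped least squares around a geological prior (intact dolomite):
#   minimize ||L @ rho - d||^2 + alpha * ||rho - rho_prior||^2
rho_prior = np.full(5, 2.8)
alpha = 0.01
A = L.T @ L + alpha * np.eye(5)
rho_est = rho_prior + np.linalg.solve(A, L.T @ (d - L @ rho_prior))
```

The prior keeps the unconstrained voxels at the intact-dolomite value while the data pull the sampled low-density voxel down, which is the stabilizing behaviour the constrained inversion is designed to provide.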

  4. Three-dimensional (3D) refractive index (RI) tomography has recently become an exciting new tool for biological studies. However, its limitation to (1) thin samples, resulting from the need for transmissive illumination, and (2) small fields of view (typically ∼50 µm × 50 µm) has hindered its utility in broader biomedical applications. In this work, we demonstrate 3D RI tomography with a large field of view in opaque, arbitrarily thick scattering samples (unsuitable for imaging with conventional transmissive tomographic techniques) with a penetration depth of ca. one mean free scattering path length (∼100 µm in tissue), using a simple, low-cost microscope system with epi-illumination. This approach leverages a solution to the inverse scattering problem via the general non-paraxial 3D optical transfer function of our quantitative oblique back-illumination microscopy (qOBM) optical system. A theoretical analysis is presented along with simulations and experimental validations using polystyrene beads and rat and human thick brain tissues. This work has significant implications for the investigation of optically thick, semi-infinite samples in a non-invasive and label-free manner. This unique 3D qOBM approach can extend the utility of 3D RI tomography for translational and clinical medicine.

  5. Data-driven reduced order models (ROMs) have recently emerged as a powerful tool for the solution of inverse scattering problems. The main drawback of this approach is that it was limited to measurement arrays with reciprocally collocated transmitters and receivers, that is, square symmetric matrix (data) transfer functions. To relax this limitation, we use our previous work Druskin et al (2021 Inverse Problems 37 075003), where the ROMs were combined with the Lippmann–Schwinger integral equation to produce a direct nonlinear inversion method. In this work we extend that approach to more general transfer functions, including non-symmetric ones, e.g. those obtained by adding only receivers or sources. The ROM is constructed from the symmetric subset of the data and is used to construct all internal solutions; the remaining receivers are then used directly in the Lippmann–Schwinger equation. We demonstrate the new approach on a number of 1D and 2D examples with non-reciprocal arrays, including a single-input/multiple-output inverse problem where the data are given by just a single-row matrix transfer function. This allows us to approach the flexibility of the Born approximation in terms of acceptable measurement arrays, while significantly improving the quality of the inversion for strongly nonlinear scattering effects.
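The Lippmann–Schwinger machinery and the Born approximation it is compared against can be illustrated, in a much simplified 1-D frequency-domain form, by contrasting a full Lippmann–Schwinger solve with a first-order Born solve. This generic sketch is not the authors' ROM method, and every parameter in it is hypothetical:

```python
import numpy as np

N = 200
x = np.linspace(-1.0, 1.0, N)
dx = x[1] - x[0]
k = 2.0
u0 = np.exp(1j * k * x)                           # incident plane wave
# 1-D Helmholtz Green's function G(x, x') = exp(ik|x - x'|) / (2ik)
G = np.exp(1j * k * np.abs(x[:, None] - x[None, :])) / (2j * k)

def born_error(amplitude):
    """Relative error of the Born-approximated scattered field versus a
    full Lippmann-Schwinger solve, for a box-shaped contrast chi."""
    chi = np.where(np.abs(x) < 0.3, amplitude, 0.0)
    A = k**2 * G * chi[None, :] * dx              # discretized LS operator
    u_full = np.linalg.solve(np.eye(N) - A, u0)   # full solve: (I - A) u = u0
    u_born = u0 + A @ u0                          # first-order Born
    return np.linalg.norm(u_born - u_full) / np.linalg.norm(u_full - u0)

err_weak = born_error(0.05)                       # weak scatterer
err_strong = born_error(0.5)                      # strong scatterer
```

The Born approximation stays accurate for weak contrasts but degrades as multiple scattering strengthens, which is the regime where a nonlinear inversion built on the full Lippmann–Schwinger equation retains its advantage.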