

Title: Improved Characterization of Ultralow‐Velocity Zones Through Advances in Bayesian Inversion of ScP Waveforms
Abstract

Ultralow-velocity zones (ULVZs) have been studied using a variety of seismic phases; however, their physical origin is still poorly understood. Short-period ScP waveforms are extensively used to infer ULVZ properties because they may be sensitive to all ULVZ elastic moduli and thickness. However, ScP waveforms are additionally complicated by the effects of path attenuation, coherent noise, and source complexity. To address these complications, we developed a hierarchical Bayesian inversion method that allows us to invert ScP waveforms from multiple events simultaneously and accounts for path attenuation and correlated noise. The inversion method is tested with synthetic predictions, which show that the inclusion of attenuation is imperative to recover ULVZ parameters accurately and that the ULVZ thickness and S-wave velocity decrease are most reliably recovered. Utilizing multiple events simultaneously reduces the effects of coherent noise and source time function complexity, which in turn allows more data to be included in the analyses. We next applied the method to ScP data recorded in Australia for 291 events that sample the core-mantle boundary beneath the Coral Sea. Our results indicate, on average, an ∼12-km-thick ULVZ with an ∼14% reduction in S-wave velocity across the region, but ULVZ properties are more variable in the south of the sampled region than in the north. P-wave velocity reductions and density perturbations are mostly below 10%. These ScP data show more than one ScP post-cursor in some areas, which may indicate complex 3-D ULVZ structures.
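As an illustration of the sampling idea only (not the authors' code), the sketch below runs a Metropolis-Hastings chain over two ULVZ parameters plus a hierarchical noise scale, fitting waveforms from two events at once. The forward model predict_scp and all of its scalings are hypothetical placeholders standing in for a real ScP synthetic calculation.

    import numpy as np

    rng = np.random.default_rng(0)

    def predict_scp(params, t):
        """Hypothetical stand-in for an ScP synthetic: thickness sets the post-cursor
        delay and the S-wave velocity drop sets its amplitude (illustrative scalings)."""
        thickness_km, dvs_pct = params
        delay = 0.25 * thickness_km / 10.0
        amp = -0.01 * dvs_pct
        main = np.exp(-((t - 1.0) ** 2) / 0.05)
        postcursor = amp * np.exp(-((t - 1.0 - delay) ** 2) / 0.05)
        return main + postcursor

    def log_likelihood(params, sigma, t, data_by_event):
        """Gaussian likelihood over all events; sigma is a hierarchical noise scale."""
        ll = 0.0
        for d in data_by_event:
            r = d - predict_scp(params, t)
            ll += -0.5 * np.sum(r ** 2) / sigma ** 2 - r.size * np.log(sigma)
        return ll

    # Toy "observed" data: two events sampling the same ULVZ, different noise realizations.
    t = np.linspace(0.0, 3.0, 300)
    true_params = (12.0, 14.0)                     # ~12 km thickness, ~14% dVs
    data = [predict_scp(true_params, t) + 0.02 * rng.standard_normal(t.size) for _ in range(2)]

    # Metropolis-Hastings over (thickness, dVs) and the noise hyperparameter sigma.
    params, sigma = np.array([5.0, 5.0]), 0.05
    ll = log_likelihood(params, sigma, t, data)
    samples = []
    for _ in range(5000):
        prop = params + rng.normal(0.0, 0.5, size=2)
        prop_sigma = abs(sigma + rng.normal(0.0, 0.005))   # reflect to keep sigma > 0
        if 0.0 < prop[0] < 50.0 and 0.0 < prop[1] < 50.0:  # uniform priors on thickness, dVs
            ll_prop = log_likelihood(prop, prop_sigma, t, data)
            if np.log(rng.uniform()) < ll_prop - ll:
                params, sigma, ll = prop, prop_sigma, ll_prop
        samples.append([params[0], params[1], sigma])
    samples = np.array(samples[1000:])                     # discard burn-in
    print("posterior means (thickness km, dVs %, sigma):", samples.mean(axis=0))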

 
Award ID(s):
1723081
NSF-PAR ID:
10426663
Author(s) / Creator(s):
 ;  ;  
Publisher / Repository:
DOI PREFIX: 10.1029
Date Published:
Journal Name:
Journal of Geophysical Research: Solid Earth
Volume:
128
Issue:
6
ISSN:
2169-9313
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    The spectra of earthquake waveforms can provide important insight into rupture processes, but the analysis and interpretation of these spectra are rarely straightforward. Here we develop a Bayesian framework that embraces the inherent data and modeling uncertainties of spectral analysis to infer key source properties. The method uses a spectral ratio approach to correct the observed S-wave spectra of nearby earthquakes for path and site attenuation. The objective then is to solve for a joint posterior probability distribution of three source parameters (seismic moment, corner frequency, and high-frequency falloff rate) for each earthquake in the sequence, as well as a measure of rupture directivity for select target events with good azimuthal station coverage. While computationally intensive, this technique provides a quantitative understanding of parameter tradeoffs and uncertainties and allows one to impose physical constraints through prior distributions on all source parameters, which guide the inversion when data are limited. We demonstrate the method by analyzing in detail the source properties of 14 different target events of magnitude M5 in southern California that span a wide range of tectonic regimes and fault systems. These prominent earthquakes, while comparable in size, exhibit marked diversity in their source properties and directivity, with clear spatial patterns, depth-dependent trends, and a preference for unilateral directivity. These coherent spatial variations in source properties suggest that regional differences in tectonic setting, hypocentral depth, or fault zone characteristics may drive variability in rupture processes, with important implications for our understanding of earthquake physics and its relation to hazard.
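    As a toy illustration of the spectral-ratio idea (not the paper's code), the sketch below assumes an omega-square (Brune-type) source model, profiles out the moment ratio analytically, and grid-evaluates the likelihood over the two corner frequencies; a full treatment would sample moment, corner frequency, and falloff rate jointly with priors, as described above.

        import numpy as np

        def source_spectrum(f, M0, fc, n=2.0):
            """Omega-square far-field displacement spectrum with falloff rate n."""
            return M0 / (1.0 + (f / fc) ** n)

        # Synthetic spectral ratio of a target event over a nearby smaller event: path
        # and site terms are common to both records and cancel in the ratio.
        rng = np.random.default_rng(1)
        f = np.logspace(-1, 1.5, 200)                                  # Hz
        ratio_obs = source_spectrum(f, 1e17, 0.4) / source_spectrum(f, 1e15, 3.0)
        ratio_obs *= np.exp(0.1 * rng.standard_normal(f.size))         # log-normal noise

        # Coarse grid over the two corner frequencies; the best-fitting log moment
        # ratio for each candidate spectral shape is profiled out in closed form.
        fc1_grid, fc2_grid = np.logspace(-1, 0.5, 60), np.logspace(0, 1, 60)
        logL = np.full((fc1_grid.size, fc2_grid.size), -np.inf)
        for i, fc1 in enumerate(fc1_grid):
            for j, fc2 in enumerate(fc2_grid):
                shape = (1.0 + (f / fc2) ** 2) / (1.0 + (f / fc1) ** 2)
                log_m = np.mean(np.log(ratio_obs) - np.log(shape))     # best log moment ratio
                resid = np.log(ratio_obs) - (log_m + np.log(shape))
                logL[i, j] = -0.5 * np.sum(resid ** 2) / 0.1 ** 2
        i, j = np.unravel_index(np.argmax(logL), logL.shape)
        print("best-fit corner frequencies (Hz):", fc1_grid[i], fc2_grid[j])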

     
  2. SUMMARY

    Accurate synthetic seismic wavefields can now be computed in 3-D earth models using the spectral element method (SEM), which helps improve resolution in full waveform global tomography. However, computational costs are still a challenge. These costs can be reduced by implementing a source stacking method, in which multiple earthquake sources are simultaneously triggered in only one teleseismic SEM simulation. One drawback of this approach is the perceived loss of resolution at depth, in particular because high-amplitude fundamental mode surface waves dominate the summed waveforms, without the possibility of windowing and weighting as in conventional waveform tomography.
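    A toy illustration of what source stacking does to the data volume, using synthetic wave packets only (no SEM involved); the 273-event, 10 000-s configuration mirrors the test described below.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, n_samples, n_events = 1.0, 10000, 273       # 10 000 s of records, 273 events
        t = np.arange(n_samples) * dt

        def toy_record(rng):
            """Stand-in for a long-period single-event seismogram (dispersed wave packets)."""
            trace = np.zeros(n_samples)
            for _ in range(3):
                t0, period = rng.uniform(500, 8000), rng.uniform(60, 200)
                trace += np.cos(2 * np.pi * (t - t0) / period) * np.exp(-((t - t0) ** 2) / (2 * 300.0 ** 2))
            return trace

        # Per-event records at one station, each aligned on its event origin time (t = 0).
        records = np.array([toy_record(rng) for _ in range(n_events)])

        # Source stacking: one summed trace per station, which is what a single simulation
        # with all sources triggered simultaneously would produce directly.
        stacked = records.sum(axis=0)
        print("per-event records:", records.shape, "-> stacked record:", stacked.shape)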

    This can be addressed by redefining the cost function and computing the cross-correlation wavefield between pairs of stations before each inversion iteration. While the Green’s function between the two stations is not reconstructed as well as in the case of ambient noise tomography, where sources are distributed more uniformly around the globe, this is not a drawback, since the same processing is applied to the 3-D synthetics and to the data, and the source parameters are known to a good approximation. By doing so, we can separate time windows with large energy arrivals corresponding to fundamental mode surface waves. This opens the possibility of designing a weighting scheme to bring out the contribution of overtones and body waves. It also makes it possible to balance the contributions of frequently sampled paths versus rarely sampled ones, as in more conventional tomography.
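    A sketch of the station-pair cross-correlation misfit with lag-window weighting follows; the stacked records are random stand-ins, and the window limits and weights are illustrative only, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        n_stations, n_samples = 4, 2000
        stacked_obs = rng.standard_normal((n_stations, n_samples))                       # "data"
        stacked_syn = stacked_obs + 0.05 * rng.standard_normal((n_stations, n_samples))  # current 3-D model

        def pair_correlations(stacked):
            """Cross-correlate the stacked records of every station pair."""
            n = stacked.shape[0]
            return {(i, j): np.correlate(stacked[i], stacked[j], mode="full")
                    for i in range(n) for j in range(i + 1, n)}

        def windowed_misfit(cc_obs, cc_syn, windows, weights):
            """L2 misfit summed over lag windows, each with its own weight, e.g. to
            down-weight the window dominated by fundamental-mode surface-wave energy."""
            total = 0.0
            for (lo, hi), w in zip(windows, weights):
                for pair in cc_obs:
                    total += w * np.sum((cc_obs[pair][lo:hi] - cc_syn[pair][lo:hi]) ** 2)
            return total

        cc_obs, cc_syn = pair_correlations(stacked_obs), pair_correlations(stacked_syn)
        print("weighted misfit:", windowed_misfit(cc_obs, cc_syn,
                                                  windows=[(0, 2000), (2000, 3999)],
                                                  weights=[1.0, 0.2]))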

    Here we present the results of proof-of-concept testing of such an approach for a synthetic 3-component long-period waveform data set (periods longer than 60 s), computed for 273 globally distributed events in a simple toy 3-D radially anisotropic upper mantle model that contains shear wave anomalies at different scales. We compare the results of inversion of 10 000-s-long stacked time-series, starting from a 1-D model, using source stacked waveforms and station-pair cross-correlations of these stacked waveforms in the definition of the cost function. We compute the gradient and the Hessian using normal mode perturbation theory, which avoids the problem of cross-talk encountered when forming the gradient using an adjoint approach. We perform inversions with and without realistic noise added and show that the model can be recovered equally well using either cost function.

    The proposed approach is computationally very efficient. While application to more realistic synthetic data sets, and ultimately to real data, is beyond the scope of this paper, since it requires additional steps to account for issues such as missing data, we illustrate how this methodology can help inform first-order questions such as model resolution in the presence of noise and trade-offs between different physical parameters (anisotropy, attenuation, crustal structure, etc.) that would be computationally very costly to address adequately using conventional full waveform tomography based on single-event wavefield computations.

     
  3. SUMMARY

    The spectral element method is currently the method of choice for computing accurate synthetic seismic wavefields in realistic 3-D earth models at the global scale. However, it requires significantly more computational time than normal mode-based approximate methods. Source stacking, whereby multiple earthquake sources are aligned on their origin time and simultaneously triggered, can reduce the computational costs by several orders of magnitude. We present the results of synthetic tests performed on a realistic radially anisotropic 3-D model, slightly modified from model SEMUCB-WM1, with three-component synthetic waveform ‘data’ for a duration of 10 000 s, filtered at periods longer than 60 s, for a set of 273 events and 515 stations. We consider two definitions of the misfit function, one based on the stacked records at individual stations and another based on station-pair cross-correlations of the stacked records. The inverse step is performed using a Gauss–Newton approach in which the gradient and Hessian are computed using normal mode perturbation theory. We investigate the retrieval of radially anisotropic long-wavelength structure in the upper mantle in the depth range 100–800 km, after fixing the crust and uppermost mantle structure constrained by fundamental mode Love and Rayleigh wave dispersion data. The results show good performance using both definitions of the misfit function, even in the presence of realistic noise, albeit with degraded amplitudes of lateral variations in the anisotropic parameter ξ. Interestingly, we show that the long-wavelength structure in the upper mantle can be retrieved when considering any one of three portions of the cross-correlation time series, corresponding to the windows where we expect surface wave overtone energy, fundamental mode energy, or a mixture of the two to be dominant, respectively. We also considered the issue of missing data by randomly removing a successively larger proportion of the available synthetic data. We replace the missing data by synthetics computed in the current 3-D model using normal mode perturbation theory. The inversion results degrade with the proportion of missing data, especially for ξ, and we find that a data availability of 45 per cent or more leads to acceptable results. We also present a strategy for grouping events and stations to minimize the number of missing data in each group. This leads to an increased number of computations but can be significantly more efficient than conventional single-event-at-a-time inversion. We apply the grouping strategy to a real picking scenario and show promising resolution capability despite the use of fewer waveforms and uneven ray path distribution. The source stacking approach can be used to rapidly obtain a starting 3-D model for more conventional full-waveform inversion at higher resolution, and to investigate assumptions made in the inversion, such as trade-offs between isotropic, anisotropic or anelastic structure, different model parametrizations or how crustal structure is accounted for.
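    The inverse step described above amounts to iterative Gauss-Newton updates. The sketch below shows only the linear-algebra core on a random toy problem, with the matrix G standing in for the Frechet derivatives that the paper computes with normal mode perturbation theory; the sizes and damping value are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)
        n_data, n_params = 500, 40

        G = rng.standard_normal((n_data, n_params))     # stand-in for Frechet derivatives
        m_true = rng.standard_normal(n_params)          # "true" model perturbation
        d_obs = G @ m_true + 0.01 * rng.standard_normal(n_data)

        m = np.zeros(n_params)                          # starting model (e.g. the 1-D reference)
        for _ in range(3):                              # a few Gauss-Newton iterations
            r = d_obs - G @ m                           # residuals for the current model
            H = G.T @ G + 1e-2 * np.eye(n_params)       # approximate (damped) Hessian
            m = m + np.linalg.solve(H, G.T @ r)         # model update
        print("recovered vs true (first 5 parameters):")
        print(m[:5])
        print(m_true[:5])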

     
  4. SUMMARY

    Knowledge of attenuation structure is important for understanding subsurface material properties. We have developed a double-difference seismic attenuation (DDQ) tomography method for high-resolution imaging of 3-D attenuation structure. Our method includes two main elements: the inversion of event-pair differential t* (dt*) data and 3-D attenuation tomography with the dt* data. We developed a new spectral ratio method that jointly inverts spectral ratio data from pairs of events observed at a common set of stations to determine the dt* data. The spectral ratio method cancels out instrument and site response terms, resulting in more accurate dt* data compared to the absolute t* obtained by traditional methods from individual spectra. Synthetic tests show that the inversion of dt* data using our spectral ratio method is robust to the choice of source model and a moderate degree of noise. We modified an existing velocity tomography code so that it can invert dt* data for 3-D attenuation structure. We applied the new method to The Geysers geothermal field, California, which has vapour-dominated reservoirs and a long history of water injection. A new Qp model at The Geysers is determined using P-wave data from earthquakes in 2011, together with our updated earthquake locations and Vp model. By taking advantage of more accurate dt* data and the cancellation of model uncertainties along the common paths outside of the source region, the DDQ tomography method achieves higher resolution, especially in the earthquake source regions, than the standard tomography method using t* data. This is validated by both real and synthetic data tests. Our Qp and Vp models show consistent variations in a normal-temperature reservoir that can be explained by variations in fracturing, permeability, and fluid saturation and/or steam pressure. A prominent low-Qp and low-Vp zone associated with very active seismicity is imaged within a high-temperature reservoir at depths below 2 km. This anomalous zone is likely partially saturated with injected fluids.
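    A toy version of the event-pair spectral-ratio step, assuming the two events share the same omega-square source shape so that only the moment ratio and the differential attenuation term survive in the log ratio; the real method inverts many stations jointly and is shown to be robust to the choice of source model.

        import numpy as np

        rng = np.random.default_rng(3)
        f = np.linspace(1.0, 20.0, 100)                         # Hz

        def spectrum(f, M0, fc, t_star, site):
            """Omega-square source * whole-path attenuation * shared site/instrument response."""
            return M0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * t_star) * site

        site = np.exp(0.3 * np.sin(0.5 * f))                    # common site/instrument term
        A1 = spectrum(f, 1e15, 4.0, 0.030, site)
        A2 = spectrum(f, 5e14, 4.0, 0.022, site)                # true dt* = 0.008 s
        log_ratio = np.log(A1 / A2) + 0.05 * rng.standard_normal(f.size)

        # Site and instrument terms cancel in the ratio; with identical source shapes
        # the log ratio is linear in frequency and its slope gives -pi * dt*.
        slope, intercept = np.polyfit(f, log_ratio, 1)
        print("estimated dt* (s):", -slope / np.pi)             # expect ~0.008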
  5. SUMMARY

    Cross-correlations of ambient seismic noise are widely used for seismic velocity imaging, monitoring and ground motion analyses. A typical step in analysing noise cross-correlation functions (NCFs) is stacking short-term NCFs over longer time periods to increase the signal quality. Spurious NCFs could contaminate the stack, degrade its quality and limit its use. Many methods have been developed to improve the stacking of coherent waveforms, including earthquake waveforms, receiver functions and NCFs. This study systematically evaluates and compares the performance of eight stacking methods: arithmetic mean (linear) stacking, robust stacking, selective stacking, cluster stacking, phase-weighted stacking, time–frequency phase-weighted stacking, Nth-root stacking and averaging after applying an adaptive covariance filter. Our results demonstrate that, in most cases, all methods can retrieve clear ballistic or first arrivals. However, they yield significant differences in how well the phase and amplitude information is preserved. This study provides a practical guide for choosing the optimal stacking method for specific research applications in ambient noise seismology. We evaluate the performance using multiple onshore and offshore seismic arrays in the Pacific Northwest region. We compare these stacking methods for NCFs calculated from raw ambient noise (referred to as Raw NCFs) and from ambient noise normalized using a one-bit clipping time normalization method (referred to as One-bit NCFs). We evaluate six metrics: signal-to-noise ratios, phase dispersion images, convergence rate, temporal changes in the ballistic and coda waves, relative amplitude decay with distance and computational time. We show that robust stacking is the best choice for all applications (velocity tomography, monitoring and attenuation studies) using Raw NCFs. For applications using One-bit NCFs, all methods but phase-weighted and Nth-root stacking are good choices for seismic velocity tomography. Linear, robust and selective stacking methods are all equally appropriate choices when using One-bit NCFs for monitoring applications. For applications relying on accurate relative amplitudes, the linear, robust, selective and cluster stacking methods all perform well with One-bit NCFs. The evaluations in this study can be generalized to a broad range of time-series analyses that use data coherence to perform ensemble stacking. Another contribution of this study is the accompanying open-source software package, StackMaster, which can be used for general-purpose time-series stacking.
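    For concreteness, minimal versions of two of the compared methods, linear stacking and phase-weighted stacking, applied to toy noise cross-correlation functions; this is not code from the StackMaster package.

        import numpy as np
        from scipy.signal import hilbert

        def linear_stack(ncfs):
            """Arithmetic mean of the short-term NCFs."""
            return ncfs.mean(axis=0)

        def phase_weighted_stack(ncfs, nu=2.0):
            """Down-weight lags where the instantaneous phase is incoherent across NCFs."""
            phasors = np.exp(1j * np.angle(hilbert(ncfs, axis=1)))
            coherence = np.abs(phasors.mean(axis=0)) ** nu
            return ncfs.mean(axis=0) * coherence

        # Toy daily NCFs: a common ballistic arrival buried in incoherent noise.
        rng = np.random.default_rng(4)
        lag = np.linspace(-50.0, 50.0, 1001)
        signal = np.exp(-((np.abs(lag) - 20.0) ** 2) / 4.0) * np.cos(2 * np.pi * 0.2 * lag)
        ncfs = signal + 1.0 * rng.standard_normal((100, lag.size))

        ls, pws = linear_stack(ncfs), phase_weighted_stack(ncfs)
        snr = lambda x: np.max(np.abs(x[np.abs(lag) < 30])) / np.std(x[np.abs(lag) > 40])
        print("SNR, linear vs phase-weighted:", snr(ls), snr(pws))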

     