
Title: Source parameters inversion of the global large earthquakes using 3-D SEM Green's functions: strain Green's function calculation and validation

Precisely constraining the source parameters of large earthquakes is one of the primary objectives of seismology. However, the quality of the results relies on the quality of the synthetic earth response. Although earth structure is laterally heterogeneous, particularly at shallow depth, most earthquake source studies at the global scale rely on Green's functions calculated for a radially symmetric (1-D) earth structure. To avoid the impact of inaccurate Green's functions, these conventional source studies use a limited set of seismic phases, such as long-period seismic waves, broad-band P and S waves at teleseismic distances (30° < ∆ < 90°), and strong ground motion records at near-fault stations. The rich information embedded in the broad-band seismograms recorded by global and regional networks is largely ignored, limiting the spatiotemporal resolution. Here we calculate 3-D strain Green's functions at 30 GSN stations for the source regions of 9 selected global earthquakes and one earthquake-prone area (California), for frequencies up to 67 mHz (periods down to 15 s), using SPECFEM3D_GLOBE and the reciprocity theorem. The 3-D SEM mesh model is composed of the mantle model S40RTS, the crustal model CRUST2.0 and the surface topography ETOPO2. We surround each target event with a grid at a horizontal spacing of 5 km and a vertical spacing of 2.0–3.0 km, allowing us to investigate not only the main shock but also the background seismicity. In total, the response at over 210 000 source points is calculated in the simulations. The number of earthquakes, spanning different focal mechanisms, centroid depth ranges and tectonic settings, could be further increased without additional computational cost if the events were properly selected to avoid overloading individual CPUs. The storage requirement can be reduced by two orders of magnitude if the output strain Green's functions are stored only for periods over 15 s.
We quantitatively evaluate the quality of these 3-D synthetic seismograms, which is frequency and phase dependent, for each source region using nearby aftershocks, before using them to constrain focal mechanisms and slip distributions. Case studies show that using a 3-D earth model significantly improves the waveform similarity and the agreement of seismic-phase amplitudes and arrival times with the observations. The limitations of current 3-D models are still notable and depend on seismic phase and frequency range. The 3-D synthetic seismograms cannot yet match the high-frequency S waves (>40 mHz) and Rayleigh waves (>20 mHz) well. Although the mean time-shifts are close to zero, the standard deviations are notable. Careful calibration using the records of nearby, better-located earthquakes is still recommended to take full advantage of the improved waveform similarity offered by 3-D models. Our results indicate that it is now feasible to systematically study large global earthquakes using the full 3-D earth response on a global scale.
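As a minimal sketch of how precomputed strain Green's functions are used: by reciprocity, a synthetic seismogram for a point source is the contraction of the six independent strain Green's function time-series with the moment tensor. The array names, component ordering and weighting below are illustrative assumptions, not the paper's actual storage format.

```python
import numpy as np

def synthesize(sgf, mt):
    """Contract strain Green's functions with a moment tensor.

    sgf : (6, nt) array of strain Green's function time-series for the
          six independent components (order Mrr, Mtt, Mpp, Mrt, Mrp,
          Mtp assumed here), precomputed via reciprocity.
    mt  : (6,) moment tensor in the same order; the off-diagonal
          components are counted twice because of the symmetry of M.
    """
    weights = np.array([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])
    return (weights * mt) @ sgf        # displacement trace, shape (nt,)

# toy check: a pure Mrr source excites only the first SGF trace
sgf = np.zeros((6, 4))
sgf[0] = [0.0, 1.0, 0.5, 0.0]
u = synthesize(sgf, np.array([1e17, 0.0, 0.0, 0.0, 0.0, 0.0]))
```

Because the Green's functions are stored per station, any trial source position and mechanism on the grid can be evaluated by this cheap contraction, with no further wavefield simulation.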

Journal Name: Geophysical Journal International
Page Range or eLocation-ID: p. 1546-1564
Publisher: Oxford University Press
Sponsoring Org: National Science Foundation
More Like this

    For over 40 yr, the global centroid-moment tensor (GCMT) project has determined location and source parameters for globally recorded earthquakes larger than magnitude 5.0. The GCMT database remains a trusted staple for the geophysical community. Its point-source moment-tensor solutions are the result of inversions that model long-period observed seismic waveforms via normal-mode summation for a 1-D reference earth model, augmented by path corrections to capture 3-D variations in surface wave phase speeds, and to account for crustal structure. While this methodology remains essentially unchanged for the ongoing GCMT catalogue, source inversions based on waveform modelling in low-resolution 3-D earth models have revealed small but persistent biases in the standard modelling approach. Keeping pace with the increased capacity and demands of global tomography requires a revised catalogue of centroid-moment tensors (CMT), automatically and reproducibly computed using Green's functions from a state-of-the-art 3-D earth model. In this paper, we modify the current procedure for the full-waveform inversion of seismic traces for the six moment-tensor parameters, centroid latitude, longitude, depth and centroid time of global earthquakes. We take the GCMT solutions as a point of departure but update them to account for the effects of a heterogeneous earth, using the global 3-D wave speed model GLAD-M25. We generate synthetic seismograms from Green's functions computed by the spectral-element method in the 3-D model, select observed seismic data and remove their instrument response, process synthetic and observed data, select segments of observed and synthetic data based on similarity, and invert for new model parameters of the earthquake's centroid location, time and moment tensor.
The events in our new, preliminary database containing 9382 global event solutions, called CMT3D for ‘3-D centroid-moment tensors’, are on average 4 km shallower, about 1 s earlier, about 5 per cent larger in scalar moment, and more double-couple in nature than in the GCMT catalogue. We discuss in detail the geographical and statistical distributions of the updated solutions, and place them in the context of earlier work. We plan to disseminate our CMT3D solutions via the online ShakeMovie platform.
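The reported shifts toward larger scalar moment and a more double-couple character can be made concrete with the standard definitions. The sketch below is generic (not the CMT3D code): it computes M0 and a double-couple percentage from a 3 × 3 moment tensor via its deviatoric eigenvalues.

```python
import numpy as np

def m0_and_dc(mt):
    """Scalar moment and double-couple percentage of a 3x3 moment tensor.

    M0 = sqrt(sum(m_ij^2) / 2); the double-couple fraction uses
    epsilon = lam_min / |lam_max| of the deviatoric eigenvalues sorted
    by absolute size, with DC% = (1 - 2|epsilon|) * 100.
    """
    m0 = np.sqrt(np.sum(mt * mt) / 2.0)
    dev = mt - np.trace(mt) / 3.0 * np.eye(3)   # remove isotropic part
    lam = np.linalg.eigvalsh(dev)
    lam = lam[np.argsort(np.abs(lam))]          # |lam[0]| <= |lam[2]|
    eps = lam[0] / abs(lam[2])
    return m0, (1.0 - 2.0 * abs(eps)) * 100.0

# toy check: a pure strike-slip double couple gives DC% = 100
mt = 1e17 * np.array([[0.0, 1.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0]])
m0, dc = m0_and_dc(mt)
```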


    Accurate synthetic seismic wavefields can now be computed in 3-D earth models using the spectral element method (SEM), which helps improve resolution in full waveform global tomography. However, computational costs are still a challenge. These costs can be reduced by implementing a source stacking method, in which multiple earthquake sources are simultaneously triggered in only one teleseismic SEM simulation. One drawback of this approach is the perceived loss of resolution at depth, in particular because high-amplitude fundamental mode surface waves dominate the summed waveforms, without the possibility of windowing and weighting as in conventional waveform tomography.

    This can be addressed by redefining the cost-function and computing the cross-correlation wavefield between pairs of stations before each inversion iteration. While the Green's function between the two stations is not reconstructed as well as in the case of ambient noise tomography, where sources are distributed more uniformly around the globe, this is not a drawback, since the same processing is applied to the 3-D synthetics and to the data, and the source parameters are known to a good approximation. By doing so, we can separate time windows with large energy arrivals corresponding to fundamental mode surface waves. This opens the possibility of designing a weighting scheme to bring out the contribution of overtones and body waves. It also makes it possible to balance the contributions of frequently sampled paths versus rarely sampled ones, as in more conventional tomography.
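A minimal illustration of the station-pair cross-correlation entering the redefined cost function might look as follows (array names are hypothetical; frequency-domain implementation for speed):

```python
import numpy as np

def station_pair_cc(u_a, u_b):
    """Cross-correlate the source-stacked records of two stations.

    Returns the correlation on a lag axis from -(nt-1) to +(nt-1)
    samples; data and 3-D synthetics would be processed identically,
    so relative time shifts map to shifted correlation peaks.
    """
    nt = len(u_a)
    n = 2 * nt - 1
    spec = np.fft.rfft(u_a, n) * np.conj(np.fft.rfft(u_b, n))
    cc = np.fft.irfft(spec, n)
    return np.roll(cc, nt - 1)          # move zero lag to the centre

# toy check: an arrival at station B delayed by 3 samples relative
# to station A produces a correlation peak at lag -3
nt = 16
u_a = np.zeros(nt); u_a[5] = 1.0
u_b = np.zeros(nt); u_b[8] = 1.0
cc = station_pair_cc(u_a, u_b)
lags = np.arange(-(nt - 1), nt)
```

Windowing this correlation by lag then isolates the high-amplitude fundamental-mode arrivals from overtones and body waves, which is what enables the weighting scheme described above.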

    Here we present the results of proof-of-concept testing of such an approach for a synthetic 3-component long period waveform data set (periods longer than 60 s), computed for 273 globally distributed events in a simple toy 3-D radially anisotropic upper mantle model which contains shear wave anomalies at different scales. We compare the results of inversion of 10 000 s long stacked time-series, starting from a 1-D model, using source stacked waveforms and station-pair cross-correlations of these stacked waveforms in the definition of the cost function. We compute the gradient and the Hessian using normal mode perturbation theory, which avoids the problem of cross-talk encountered when forming the gradient using an adjoint approach. We perform inversions with and without realistic noise added and show that the model can be recovered equally well using one or the other cost function.

    The proposed approach is computationally very efficient. Application to more realistic synthetic data sets, as well as to real data, is beyond the scope of this paper, since it requires additional steps to account for issues such as missing data. Nevertheless, we illustrate how this methodology can help answer first-order questions such as model resolution in the presence of noise, and trade-offs between different physical parameters (anisotropy, attenuation, crustal structure, etc.) that would be computationally very costly to address adequately using conventional full waveform tomography based on single-event wavefield computations.


    Crustal seismic velocity models provide essential information for many applications including earthquake source properties, simulations of ground motion and related derivative products. We present a systematic workflow for assessing the accuracy of velocity models with full-waveform simulations. The framework is applied to four regional seismic velocity models for southern California: CVM-H15.11, CVM-S4.26, CVM-S4.26.M01 that includes a shallow geotechnical layer, and the model of Berg et al. For each model, we perform 3-D viscoelastic wave propagation simulations for 48 virtual seismic noise sources (down to 2 s) and 44 moderate-magnitude earthquakes (down to 2 s generally and 0.5 s for some cases) assuming a minimum shear wave velocity of 200 m/s. The synthetic waveforms are compared with observations associated with both earthquake records and noise cross-correlation data sets. We measure, at multiple period bands for well-isolated seismic phases, traveltime delays and normalized zero-lag cross-correlation coefficients between the synthetic and observed data. The obtained measurements are summarized using the mean absolute deviation of time delay and the mean correlation coefficient. These two metrics provide reliable statistical representations of model quality with consistent results in all data sets. In addition to assessing the overall (average) performance of different models in the entire study area, we examine spatial variations of the models' quality. All examined models show good phase and waveform agreements for surface waves at periods longer than 5 s, and discrepancies at shorter periods reflecting small-scale heterogeneities and near-surface structures. The model performing best overall is CVM-S4.26.M01. The largest misfits for both body and surface waves are in basin structures and around large fault zones. Inaccuracies generated in these areas may affect tomography and model simulation results at other regions.
The seismic velocity models for southern California can be improved by adding better resolved structural representations of the shallow crust and volumes around the main faults.
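The two metrics described above, the traveltime delay of a windowed phase and the normalized zero-lag cross-correlation coefficient, can be sketched as follows; the sign convention and the simple full-window handling are assumptions:

```python
import numpy as np

def phase_metrics(obs, syn, dt):
    """Traveltime delay and normalized zero-lag correlation coefficient
    between an observed and a synthetic windowed phase.

    The delay is the lag (in seconds) maximizing the cross-correlation;
    positive means the observed phase arrives later than the synthetic.
    """
    n = len(obs)
    cc = np.correlate(obs, syn, mode="full")     # lags -(n-1)..(n-1)
    delay = (np.argmax(cc) - (n - 1)) * dt
    cc0 = np.dot(obs, syn) / np.sqrt(np.dot(obs, obs) * np.dot(syn, syn))
    return delay, cc0

# toy check: the observed pulse lags the synthetic by 2 samples
n = 16
syn = np.zeros(n); syn[5:8] = [1.0, 2.0, 1.0]
obs = np.roll(syn, 2)
delay, cc0 = phase_metrics(obs, syn, dt=0.5)
```

Aggregating `abs(delay)` and `cc0` over many station/phase pairs would give the mean absolute deviation of time delay and the mean correlation coefficient used to rank models.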


    The seismic quality factor (Q) of the Earth's mantle is of great importance for the understanding of the physical and chemical properties that control mantle anelasticity. The radial structure of the Earth's Q is less well resolved compared to its wave speed structure, and large discrepancies exist among global 1-D Q models. In this study, we build a global data set of amplitude measurements of S, SS, SSS and SSSS waves using earthquakes that occurred between 2009 and 2017 with moment magnitudes ranging from 6.5 to 8.0. Synthetic seismograms for those events are computed in a 1-D reference model PREM, and amplitude ratios between observed and synthetic seismograms are calculated in the frequency domain by spectral division, with measurement windows determined based on visual inspection of seismograms. We simulate wave propagation in the global velocity model S40RTS using SPECFEM3D and show that the average amplitude ratio as a function of epicentral distance is not sensitive to 3-D focusing and defocusing for the source–receiver configuration of the data set. This data set includes about 5500 S and SS measurements that are not affected by mantle transition zone triplications (multiple ray paths), and those measurements are applied in linear inversions to obtain a preliminary 1-D Q model QMSI. This model reveals a high Q region in the uppermost lower mantle. While model QMSI improves the overall fit to the entire data set, it does not fully explain SS amplitudes at short epicentral distances or the amplitudes of the SSS and SSSS waves. Using forward modelling, we modify the 1-D model QMSI iteratively to reduce the overall amplitude misfit of the entire data set. The final Q model QMSF requires a stronger and thicker high Q region at depths between 600 and 900 km. This anelastic structure indicates possible viscosity layering in the mid mantle.
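A simplified sketch of the amplitude-ratio measurement by spectral division follows; the band limits and the stabilization by band-averaging are illustrative assumptions, not the study's exact procedure:

```python
import numpy as np

def amplitude_ratio(obs, syn, dt, band=(0.005, 0.02)):
    """Observed/synthetic amplitude ratio by spectral division.

    Amplitude spectra are averaged over a frequency band (5-20 mHz by
    default, an illustrative choice) before dividing, which keeps the
    ratio stable where individual spectral bins are small.
    """
    freqs = np.fft.rfftfreq(len(obs), dt)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    a_obs = np.abs(np.fft.rfft(obs))[sel].mean()
    a_syn = np.abs(np.fft.rfft(syn))[sel].mean()
    return a_obs / a_syn

# toy check: doubling the amplitude doubles the ratio
dt = 1.0
t = np.arange(1000) * dt
syn = np.sin(2.0 * np.pi * 0.01 * t)    # 10 mHz, inside the band
ratio = amplitude_ratio(2.0 * syn, syn, dt)
```

In a Q inversion, ratios below 1 measured in fixed windows at increasing epicentral distance would constrain the attenuation accumulated along the ray path.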

    Seismograms are the convolution of seismic sources with the media that seismic waves propagate through, and are therefore the primary observations for studying seismic source parameters and the Earth's interior. Routine earthquake location and traveltime tomography rely on accurate seismic phase picks (e.g., P and S arrivals). As data volumes increase, reliable automated seismic phase-picking methods are needed to analyze data and provide timely earthquake information. However, most traditional autopickers suffer at low signal-to-noise ratios and usually require additional effort to tune hyperparameters for each case. In this study, we proposed a deep-learning approach that adapted soft attention gates (AGs) and recurrent-residual convolution units (RRCUs) into the backbone U-Net for seismic phase picking. The attention mechanism was implemented to suppress responses from waveforms irrelevant to seismic phases, and the cooperating RRCUs further enhanced temporal connections of seismograms at multiple scales. We used numerous earthquake recordings in Taiwan, with diverse focal mechanisms and wide depth and magnitude distributions, to train and test our model. Setting the picking error tolerance to 0.1 s and the predicted probability threshold to 0.5, the AG with recurrent-residual convolution unit (ARRU) phase picker achieved F1 scores of 98.62% for P arrivals and 95.16% for S arrivals, and picking rates were 96.72% for P waves and 90.07% for S waves. The ARRU phase picker also showed great generalization capability when handling unseen data: when the model trained with Taiwan data was applied to southern California data, the ARRU phase picker showed no performance degradation. Compared with manual picks, the arrival times determined by the ARRU phase picker showed higher consistency, as evaluated using a set of repeating earthquakes. Arrival picks with less human error could benefit studies such as earthquake location and seismic tomography.
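A hedged sketch of how automated picks can be scored against manual picks with a 0.1 s tolerance, as in the F1 statistics above; the greedy nearest-pick matching rule is an assumption, since the abstract does not specify one:

```python
def picking_scores(pred, manual, tol=0.1):
    """Precision, recall and F1 for phase picks with a time tolerance.

    A predicted pick is a true positive if it lies within `tol`
    seconds of a not-yet-matched manual pick (greedy nearest match).
    """
    used = [False] * len(manual)
    tp = 0
    for p in sorted(pred):
        best, best_d = None, tol
        for i, t in enumerate(manual):
            if not used[i] and abs(p - t) <= best_d:
                best, best_d = i, abs(p - t)
        if best is not None:
            used[best] = True
            tp += 1
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(manual) if manual else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# toy check: two of three picks fall within 0.1 s of a manual pick
prec, rec, f1 = picking_scores([1.0, 5.05, 9.5], [1.02, 5.0, 9.0])
```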