Title: Global centroid moment tensor solutions in a heterogeneous earth: the CMT3D catalogue
SUMMARY

For over 40 yr, the global centroid-moment tensor (GCMT) project has determined location and source parameters for globally recorded earthquakes larger than magnitude 5.0. The GCMT database remains a trusted staple for the geophysical community. Its point-source moment-tensor solutions are the result of inversions that model long-period observed seismic waveforms via normal-mode summation for a 1-D reference earth model, augmented by path corrections to capture 3-D variations in surface wave phase speeds, and to account for crustal structure. While this methodology remains essentially unchanged for the ongoing GCMT catalogue, source inversions based on waveform modelling in low-resolution 3-D earth models have revealed small but persistent biases in the standard modelling approach. Keeping pace with the increased capacity and demands of global tomography requires a revised catalogue of centroid-moment tensors (CMT), automatically and reproducibly computed using Green's functions from a state-of-the-art 3-D earth model. In this paper, we modify the current procedure for the full-waveform inversion of seismic traces for the six moment-tensor parameters, centroid latitude, longitude, depth and centroid time of global earthquakes. We take the GCMT solutions as a point of departure but update them to account for the effects of a heterogeneous earth, using the global 3-D wave speed model GLAD-M25. We generate synthetic seismograms from Green's functions computed by the spectral-element method in the 3-D model, select observed seismic data and remove their instrument response, process synthetic and observed data, select segments of observed and synthetic data based on similarity, and invert for new model parameters of the earthquake’s centroid location, time and moment tensor. The events in our new, preliminary database containing 9382 global event solutions, called CMT3D for ‘3-D centroid-moment tensors’, are on average 4 km shallower, about 1 s earlier, about 5 per cent larger in scalar moment, and more double-couple in nature than in the GCMT catalogue. We discuss in detail the geographical and statistical distributions of the updated solutions, and place them in the context of earlier work. We plan to disseminate our CMT3D solutions via the online ShakeMovie platform.
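The inversion step described above can be sketched compactly: given observed and synthetic waveform segments and partial derivatives of the synthetics with respect to the source parameters, one Gauss–Newton update of the moment tensor and centroid is a damped linear least-squares solve. The Python sketch below is illustrative only, with hypothetical array names, and is not the catalogue's actual implementation.

```python
# Minimal sketch of one Gauss-Newton step of a linearized CMT inversion,
# in the spirit of the workflow described above (not the authors' code).
# Assumed, hypothetical inputs:
#   d[k]       : observed waveform segment k (numpy array)
#   s[k]       : synthetic for the current source model in the same window
#   frechet[k] : (n_par, n_samples) partial derivatives of s[k] with respect
#                to the 10 source parameters (6 moment-tensor elements,
#                centroid latitude, longitude, depth and time)
import numpy as np

def gauss_newton_step(d, s, frechet, damping=1e-3):
    """Return the parameter perturbation that best fits the residuals."""
    n_par = frechet[0].shape[0]
    AtA = np.zeros((n_par, n_par))
    Atb = np.zeros(n_par)
    for dk, sk, Gk in zip(d, s, frechet):
        r = dk - sk                  # waveform residual in this window
        AtA += Gk @ Gk.T             # accumulate the normal equations
        Atb += Gk @ r
    AtA += damping * np.trace(AtA) / n_par * np.eye(n_par)  # simple damping
    return np.linalg.solve(AtA, Atb)  # perturbation to the 10 parameters
```

In practice the residuals would be weighted by window quality, and the four centroid parameters require finite-difference synthetics, since only the six moment-tensor elements enter the synthetics linearly.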

 
NSF-PAR ID:
10370369
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
Geophysical Journal International
Volume:
231
Issue:
3
ISSN:
0956-540X
Format(s):
Medium: X Size: p. 1727-1738
Sponsoring Org:
National Science Foundation
More Like this
  1. SUMMARY

    Precisely constraining the source parameters of large earthquakes is one of the primary objectives of seismology. However, the quality of the results relies on the quality of the synthetic earth response. Although earth structure is laterally heterogeneous, particularly at shallow depths, most earthquake source studies at the global scale rely on Green's functions calculated for radially symmetric (1-D) earth structure. To avoid the impact of inaccurate Green's functions, these conventional source studies use a limited set of seismic phases, such as long-period seismic waves, broad-band P and S waves at teleseismic distances (30° < ∆ < 90°), and strong ground motion records at near-fault stations. The rich information embedded in the broad-band seismograms recorded by global and regional networks is largely ignored, limiting the spatiotemporal resolution. Here we calculate 3-D strain Green's functions at 30 GSN stations for the source regions of 9 selected global earthquakes and one earthquake-prone area (California), for frequencies up to 67 mHz (periods down to 15 s), using SPECFEM3D_GLOBE and the reciprocity theorem. The 3-D SEM mesh model is composed of mantle model S40RTS, crustal model CRUST2.0 and surface topography ETOPO2. We surround each target event with a grid at a horizontal spacing of 5 km and a vertical spacing of 2.0–3.0 km, allowing us to investigate not only the main shock but also the background seismicity. In total, the response at over 210 000 source points is calculated. The number of earthquakes, covering different focal mechanisms, centroid depths and tectonic settings, could be increased further without additional computational cost if they were properly selected to avoid overloading individual CPUs. The storage requirement can be reduced by two orders of magnitude if the output strain Green's functions are stored only for periods over 15 s. We quantitatively evaluate the quality of these 3-D synthetic seismograms, which is frequency and phase dependent, for each source region using nearby aftershocks, before using them to constrain focal mechanisms and slip distributions. Case studies show that using a 3-D earth model significantly improves the waveform similarity and the agreement of the amplitudes and arrival times of seismic phases with the observations. The limitations of current 3-D models are still notable and depend on the seismic phase and frequency range: the 3-D synthetic seismograms cannot yet match high-frequency (>40 mHz) S waves or (>20 mHz) Rayleigh waves well. Although the mean time-shifts are close to zero, the standard deviations are notable. Careful calibration using the records of nearby, better-located earthquakes is therefore still recommended to take full advantage of the improved waveform similarity afforded by 3-D models. Our results indicate that it is now feasible to systematically study large earthquakes worldwide using the full 3-D earth response.
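    As a rough illustration of how stored strain Green's functions are used, the reciprocity theorem lets a synthetic seismogram for any trial point source be assembled by contracting the moment tensor with the strain at the source grid point due to a unit force at the station. The sketch below assumes a hypothetical array layout and is not the authors' code.

```python
# Minimal sketch of building a synthetic seismogram from stored strain
# Green's functions via reciprocity. Hypothetical input: eps has shape
# (3, 3, n_samples), the strain time-series at the source grid point due
# to a unit force at the station (one such array per receiver component).
import numpy as np

def synthetic_from_strain_gf(eps, moment_tensor):
    """u(t) = M_ij * eps_ij(t), summed over the symmetric indices i, j."""
    M = np.asarray(moment_tensor)          # (3, 3) symmetric moment tensor
    return np.einsum('ij,ijt->t', M, eps)  # contract over i and j
```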

     
  2. SUMMARY

    The seismic quality factor (Q) of the Earth's mantle is of great importance for understanding the physical and chemical properties that control mantle anelasticity. The radial structure of the Earth's Q is less well resolved than its wave speed structure, and large discrepancies exist among global 1-D Q models. In this study, we build a global data set of amplitude measurements of S, SS, SSS and SSSS waves using earthquakes that occurred between 2009 and 2017 with moment magnitudes between 6.5 and 8.0. Synthetic seismograms for these events are computed in the 1-D reference model PREM, and amplitude ratios between observed and synthetic seismograms are calculated in the frequency domain by spectral division, with measurement windows determined by visual inspection of the seismograms. We simulate wave propagation in the global velocity model S40RTS using SPECFEM3D and show that the average amplitude ratio as a function of epicentral distance is not sensitive to 3-D focusing and defocusing for the source–receiver configuration of the data set. The data set includes about 5500 S and SS measurements that are not affected by mantle transition zone triplications (multiple ray paths), and these measurements are used in linear inversions to obtain a preliminary 1-D Q model, QMSI. This model reveals a high-Q region in the uppermost lower mantle. While model QMSI improves the overall fit to the entire data set, it does not fully explain the SS amplitudes at short epicentral distances or the amplitudes of the SSS and SSSS waves. Using forward modelling, we modify the 1-D model QMSI iteratively to reduce the overall amplitude misfit of the entire data set. The final Q model, QMSF, requires a stronger and thicker high-Q region at depths between 600 and 900 km. This anelastic structure points to possible viscosity layering in the mid-mantle.
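    A minimal sketch of the windowed spectral-division measurement described above could look as follows; the window selection, tapering and instrument handling of the actual study are omitted, and the function and variable names are hypothetical.

```python
# Minimal sketch of a windowed amplitude-ratio measurement by spectral
# division between an observed and a synthetic trace (illustrative only).
import numpy as np

def amplitude_ratio(obs, syn, dt, fmin, fmax, water_level=1e-3):
    """Average |OBS(f) / SYN(f)| over a frequency band [fmin, fmax] in Hz."""
    n = len(obs)
    freqs = np.fft.rfftfreq(n, d=dt)
    O, S = np.fft.rfft(obs), np.fft.rfft(syn)
    # Stabilize the division with a water level on the synthetic spectrum.
    S_abs = np.abs(S)
    S_safe = np.maximum(S_abs, water_level * S_abs.max())
    band = (freqs >= fmin) & (freqs <= fmax)
    return np.mean(np.abs(O[band]) / S_safe[band])
```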

     
  3. SUMMARY

    Accurate synthetic seismic wavefields can now be computed in 3-D earth models using the spectral element method (SEM), which helps improve resolution in full waveform global tomography. However, computational costs are still a challenge. These costs can be reduced by implementing a source stacking method, in which multiple earthquake sources are simultaneously triggered in only one teleseismic SEM simulation. One drawback of this approach is the perceived loss of resolution at depth, in particular because high-amplitude fundamental mode surface waves dominate the summed waveforms, without the possibility of windowing and weighting as in conventional waveform tomography.

    This can be addressed by redefining the cost function and computing the cross-correlation wavefield between pairs of stations before each inversion iteration. While the Green's function between the two stations is not reconstructed as well as in the case of ambient noise tomography, where sources are distributed more uniformly around the globe, this is not a drawback, since the same processing is applied to the 3-D synthetics and to the data, and the source parameters are known to a good approximation. By doing so, we can separate time windows with large energy arrivals corresponding to fundamental mode surface waves. This opens the possibility of designing a weighting scheme to bring out the contribution of overtones and body waves. It also makes it possible to balance the contributions of frequently sampled paths versus rarely sampled ones, as in more conventional tomography.
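    A minimal sketch of forming station-pair cross-correlations of source-stacked records, assuming hypothetical data structures and omitting the filtering, tapering and windowing a real workflow would need:

```python
# Illustrative only: stack the records of many simultaneous sources at
# each station, then cross-correlate every station pair. Names and data
# layout are hypothetical, not the authors' implementation.
import numpy as np
from itertools import combinations

def stack_and_correlate(records):
    """records: dict station -> dict event -> waveform (equal lengths).

    Returns {(sta_a, sta_b): cross-correlation of the stacked traces}.
    """
    stacked = {sta: np.sum(list(evts.values()), axis=0)   # source stacking
               for sta, evts in records.items()}
    ccs = {}
    for a, b in combinations(sorted(stacked), 2):
        ccs[(a, b)] = np.correlate(stacked[a], stacked[b], mode='full')
    return ccs
```

Because the same stacking and correlation are applied to both the 3-D synthetics and the data, any imperfection in the reconstructed inter-station response cancels in the misfit, which is the point made above.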

    Here we present the results of proof of concept testing of such an approach for a synthetic 3-component long period waveform data set (periods longer than 60 s), computed for 273 globally distributed events in a simple toy 3-D radially anisotropic upper mantle model which contains shear wave anomalies at different scales. We compare the results of inversion of 10 000 s long stacked time-series, starting from a 1-D model, using source stacked waveforms and station-pair cross-correlations of these stacked waveforms in the definition of the cost function. We compute the gradient and the Hessian using normal mode perturbation theory, which avoids the problem of cross-talk encountered when forming the gradient using an adjoint approach. We perform inversions with and without realistic noise added and show that the model can be recovered equally well using one or the other cost function.

    The proposed approach is computationally very efficient. Application to more realistic synthetic data sets, and to real data, is beyond the scope of this paper, since it requires additional steps to account for issues such as missing data. Nevertheless, we illustrate how this methodology can help inform first-order questions such as model resolution in the presence of noise, and trade-offs between different physical parameters (anisotropy, attenuation, crustal structure, etc.), which would be computationally very costly to address adequately using conventional full waveform tomography based on single-event wavefield computations.

     
  4. SUMMARY

    Earth structure beneath the Antarctic exerts an important control on the evolution of the ice sheet. A range of geological and geophysical data sets indicate that this structure is complex, with the western sector characterized by a lithosphere of thickness ∼50–100 km and viscosities within the upper mantle that vary by 2–3 orders of magnitude. Recent analyses of uplift rates estimated from Global Navigation Satellite System (GNSS) observations have used forward modelling of glacial isostatic adjustment (GIA) to infer 1-D viscosity profiles below West Antarctica, discretized into a small set of layers within the upper mantle. It remains unclear, however, what these 1-D viscosity models represent in an area with complex 3-D mantle structure, and over what geographic length-scale they are applicable. Here, we explore this issue by repeating the same modelling procedure but applied to synthetic uplift rates computed using a realistic model of 3-D viscoelastic Earth structure inferred from seismic tomographic imaging of the region, a finite-volume treatment of GIA that captures this complexity, and a loading history of Antarctic ice mass changes inferred over the period 1992–2017. We find differences of up to an order of magnitude between the best-fitting 1-D inferences and regionally averaged depth profiles through the 3-D viscosity field used to generate the synthetics. Additional calculations suggest that this level of disagreement is not systematically improved by increasing the number of observation sites adopted in the analysis. Moreover, the 1-D models inferred from such a procedure are non-unique; that is, a broad range of viscosity profiles fits the synthetic uplift rates equally well, in part as a consequence of correlations between the viscosity values within each layer. While the uplift rate at each GNSS site is sensitive to a unique subspace of the complex 3-D viscosity field, additional analyses based on rates from subsets of proximal sites showed no consistent improvement in the level of bias of the 1-D inference. We also conclude that the broad, regional-scale uplift field generated with the 3-D model is poorly represented by a prediction based on the best-fitting 1-D Earth model. Future analyses of GNSS data should be extended to include horizontal rates, and should move towards inversions for 3-D structure that reflect the intrinsic 3-D resolving power of the data.
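    For the comparison between 1-D inferences and the 3-D field, a regionally averaged depth profile can be formed by log-averaging the viscosity within each depth layer. The sketch below is only illustrative, with hypothetical array names and layer bounds, and does not reproduce the paper's finite-volume GIA machinery.

```python
# Illustrative only: collapse a 3-D viscosity field into a regional 1-D
# profile by log-space averaging within depth layers.
import numpy as np

def layered_log_average(viscosity, depths, layer_bounds):
    """viscosity: (n_depth, n_lat, n_lon) in Pa s; depths: (n_depth,) in km.

    Returns one log-averaged viscosity per layer defined by layer_bounds (km).
    """
    profile = []
    for top, bottom in zip(layer_bounds[:-1], layer_bounds[1:]):
        in_layer = (depths >= top) & (depths < bottom)
        profile.append(10 ** np.mean(np.log10(viscosity[in_layer])))
    return np.array(profile)

# e.g. layer_bounds = [70, 200, 400, 670] for a coarse upper-mantle split
```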

     
  5. Abstract

    We develop a semiautomated method for estimating the directivity, rupture area, duration, and centroid velocity of earthquakes from second seismic moments. The method is applied to 41 southern California earthquakes with magnitudes in the range 3.5–5.2 and provides stable results for 28 events. Apparent source time functions (ASTFs) of P and S phases are derived using deconvolution with three stacked empirical Green's functions (seGf). The use of seGf suppresses nongeneric source effects, improves the focal mechanism correspondence to the analyzed earthquakes, and typically allows inclusion of 5 to 15 more ASTFs compared with analysis using a single eGf. Most analyzed earthquakes in the Trifurcation area of the San Jacinto Fault have directivities toward the northwest, while events around Cajon Pass and San Gabriel Mountain tend to propagate toward the southeast. These results are generally consistent with predictions for dynamic rupture on bimaterial interfaces associated with the imaged velocity contrasts in the area. The second moment inversions also provide constraints on the upper and lower bounds of rupture areas in our data set. Stress drops and uncertainties are estimated for elliptical ruptures using the derived characteristic rupture length and width. The semiautomated second moment method with seGfs can be used for routine application to moderate earthquakes in locations with good station coverage.
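    The ASTF estimation step can be illustrated with a standard water-level deconvolution in the frequency domain; this is a generic sketch with hypothetical names, not the authors' semiautomated implementation.

```python
# Illustrative only: frequency-domain water-level deconvolution of a
# target record by a stacked empirical Green's function to obtain an
# apparent source time function (ASTF).
import numpy as np

def astf_waterlevel(target, egf_stack, water_level=0.01):
    """Deconvolve egf_stack from target; both are equal-length arrays."""
    n = len(target)
    T = np.fft.rfft(target)
    E = np.fft.rfft(egf_stack)
    denom = np.abs(E) ** 2
    denom = np.maximum(denom, water_level * denom.max())  # stabilize division
    return np.fft.irfft(T * np.conj(E) / denom, n=n)      # ASTF estimate
```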

     