
Title: Global centroid moment tensor solutions in a heterogeneous earth: the CMT3D catalogue

For over 40 yr, the global centroid-moment tensor (GCMT) project has determined the locations and source parameters of globally recorded earthquakes larger than magnitude 5.0. The GCMT database remains a trusted staple for the geophysical community. Its point-source moment-tensor solutions are the result of inversions that model long-period observed seismic waveforms via normal-mode summation for a 1-D reference earth model, augmented by path corrections that capture 3-D variations in surface-wave phase speeds and account for crustal structure. While this methodology remains essentially unchanged for the ongoing GCMT catalogue, source inversions based on waveform modelling in low-resolution 3-D earth models have revealed small but persistent biases in the standard modelling approach. Keeping pace with the increased capacity and demands of global tomography requires a revised catalogue of centroid-moment tensors (CMT), automatically and reproducibly computed using Green's functions from a state-of-the-art 3-D earth model. In this paper, we modify the current procedure for the full-waveform inversion of seismic traces for the six moment-tensor parameters, centroid latitude, longitude, depth and centroid time of global earthquakes. We take the GCMT solutions as a point of departure but update them to account for the effects of a heterogeneous earth, using the global 3-D wave speed model GLAD-M25. We generate synthetic seismograms from Green's functions computed by the spectral-element method in the 3-D model, select observed seismic data and remove their instrument response, process synthetic and observed data, select segments of observed and synthetic data based on similarity, and invert for new model parameters of the earthquake's centroid location, time and moment tensor.
The events in our new, preliminary database containing 9382 global event solutions, called CMT3D for ‘3-D centroid-moment tensors’, are on average 4 km shallower, about 1 s earlier, about 5 per cent larger in scalar moment, and more double-couple in nature than in the GCMT catalogue. We discuss in detail the geographical and statistical distributions of the updated solutions, and place them in the context of earlier work. We plan to disseminate our CMT3D solutions via the online ShakeMovie platform.
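The final inversion step described above is linear in the six moment-tensor elements: a point-source synthetic is a weighted sum of six partial-derivative seismograms, one per element, so the moment tensor can be fit by least squares while the centroid location and time are updated iteratively. A minimal sketch of the linear part, with a toy random matrix standing in for SEM-computed partials (the function name and array shapes are ours, not the paper's):

```python
import numpy as np

def invert_moment_tensor(obs, green):
    """Least-squares fit of the six moment-tensor elements.

    obs   : (n_samples,) concatenated observed waveform segments
    green : (n_samples, 6) partial-derivative seismograms, one column
            per element (Mrr, Mtt, Mpp, Mrt, Mrp, Mtp); the problem is
            linear because displacement is a linear combination of them.
    """
    m, *_ = np.linalg.lstsq(green, obs, rcond=None)
    return m

# Toy example: recover a known mechanism from noise-free data.
rng = np.random.default_rng(0)
G = rng.standard_normal((500, 6))                     # stand-in for SEM partials
m_true = np.array([1.0, -0.5, -0.5, 0.2, 0.0, 0.1])   # arbitrary units
m_est = invert_moment_tensor(G @ m_true, G)
scalar_moment = np.sqrt(0.5 * np.sum(m_est**2))       # M0 from the tensor norm
```

In practice the centroid parameters enter nonlinearly, so a full CMT inversion alternates this linear solve with perturbations of latitude, longitude, depth and time.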

Publisher / Repository: Oxford University Press
Journal Name: Geophysical Journal International
Size: p. 1727-1738
Sponsoring Org: National Science Foundation
More Like this

Precisely constraining the source parameters of large earthquakes is one of the primary objectives of seismology. However, the quality of the results relies on the quality of the synthetic earth response. Although earth structure is laterally heterogeneous, particularly at shallow depth, most earthquake source studies at the global scale rely on Green's functions calculated for radially symmetric (1-D) earth structure. To avoid the impact of inaccurate Green's functions, these conventional source studies use a limited set of seismic phases, such as long-period seismic waves, broad-band P and S waves at teleseismic distances (30° < ∆ < 90°), and strong ground motion records at near-fault stations. The rich information embedded in the broad-band seismograms recorded by global and regional networks is largely ignored, limiting the spatiotemporal resolution. Here we calculate 3-D strain Green's functions at 30 GSN stations for the source regions of 9 selected global earthquakes and one earthquake-prone area (California), at frequencies up to 67 mHz (periods of 15 s and longer), using SPECFEM3D_GLOBE and the reciprocity theorem. The 3-D SEM mesh model is composed of mantle model S40RTS, crustal model CRUST2.0 and surface topography ETOPO2. We surround each target event with grids of 5 km horizontal spacing and 2.0–3.0 km vertical spacing, allowing us to investigate not only the main shock but also the background seismicity. In total, the response at over 210 000 source points is calculated in the simulations. The number of earthquakes, spanning different focal mechanisms, centroid depth ranges and tectonic settings, could be increased further without additional computational cost if the events were properly selected to avoid overloading individual CPUs. The storage requirement can be reduced by two orders of magnitude if the output strain Green's functions are stored only for periods over 15 s.
We quantitatively evaluate the quality of these 3-D synthetic seismograms, which are frequency and phase dependent, for each source region using nearby aftershocks, before using them to constrain focal mechanisms and slip distributions. Case studies show that using a 3-D earth model significantly improves waveform similarity and the agreement of seismic-phase amplitudes and arrival times with the observations. The limitations of current 3-D models are still notable, and depend on seismic phase and frequency range. The 3-D synthetic seismograms cannot yet match high-frequency (>40 mHz) S waves or (>20 mHz) Rayleigh waves well. Though the mean time-shifts are close to zero, the standard deviations are notable. Careful calibration using the records of nearby, better-located earthquakes is still recommended to take full advantage of the better waveform similarity afforded by 3-D models. Our results indicate that it is now feasible to systematically study large earthquakes using the full 3-D earth response on a global scale.
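The reciprocity step in this workflow reduces each synthetic to a contraction of the moment tensor with the strain Green's function stored at the source grid point, u(t) = Σᵢⱼ Mᵢⱼ Hᵢⱼ(t). A toy sketch of that contraction (function name and array layout are ours, not the paper's):

```python
import numpy as np

def displacement_from_strain_gf(M, strain_gf):
    """Reciprocal synthetic: contract the (3, 3) moment tensor with the
    (nt, 3, 3) strain Green's function at the source grid point,
    summing over both tensor indices at every time sample."""
    return np.einsum('ij,tij->t', M, strain_gf)

# Toy check with a random symmetric moment tensor.
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
M = 0.5 * (M + M.T)                      # moment tensors are symmetric
H = rng.standard_normal((100, 3, 3))     # stand-in for a stored strain GF
u = displacement_from_strain_gf(M, H)
```

Storing H once per station-source pair is what lets thousands of candidate sources be evaluated without re-running the wave simulation.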


The seismic quality factor (Q) of the Earth's mantle is of great importance for the understanding of the physical and chemical properties that control mantle anelasticity. The radial structure of the Earth's Q is less well resolved than its wave speed structure, and large discrepancies exist among global 1-D Q models. In this study, we build a global data set of amplitude measurements of S, SS, SSS and SSSS waves using earthquakes that occurred between 2009 and 2017 with moment magnitudes ranging from 6.5 to 8.0. Synthetic seismograms for those events are computed in the 1-D reference model PREM, and amplitude ratios between observed and synthetic seismograms are calculated in the frequency domain by spectral division, with measurement windows determined by visual inspection of the seismograms. We simulate wave propagation in the global velocity model S40RTS using SPECFEM3D and show that the average amplitude ratio as a function of epicentral distance is not sensitive to 3-D focusing and defocusing for the source–receiver configuration of the data set. This data set includes about 5500 S and SS measurements that are not affected by mantle transition zone triplications (multiple ray paths), and those measurements are used in linear inversions to obtain a preliminary 1-D Q model, QMSI. This model reveals a high-Q region in the uppermost lower mantle. While model QMSI improves the overall data fit of the entire data set, it does not fully explain SS amplitudes at short epicentral distances or the amplitudes of the SSS and SSSS waves. Using forward modelling, we modify QMSI iteratively to reduce the overall amplitude misfit of the entire data set. The final Q model, QMSF, requires a stronger and thicker high-Q region at depths between 600 and 900 km. This anelastic structure indicates possible viscosity layering in the mid-mantle.
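The core measurement here is a spectral division of observed by synthetic amplitudes within a window. A minimal sketch (the function name, band averaging, and guard against empty bands are our choices, not the paper's):

```python
import numpy as np

def amplitude_ratio(obs, syn, dt, fmin, fmax):
    """Observed/synthetic amplitude ratio by spectral division,
    averaged over the measurement band [fmin, fmax] in Hz."""
    freqs = np.fft.rfftfreq(len(obs), dt)
    band = (freqs >= fmin) & (freqs <= fmax)
    a_obs = np.abs(np.fft.rfft(obs))[band]
    a_syn = np.abs(np.fft.rfft(syn))[band]
    return np.mean(a_obs / a_syn)

# Toy example: an "observed" trace exactly twice the synthetic.
rng = np.random.default_rng(2)
syn = rng.standard_normal(2048)
obs = 2.0 * syn
r = amplitude_ratio(obs, syn, dt=1.0, fmin=0.01, fmax=0.05)
```

A ratio below 1 in real data at increasing distance is the attenuation signal that the Q inversion then maps into depth-dependent structure.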


    Accurate synthetic seismic wavefields can now be computed in 3-D earth models using the spectral element method (SEM), which helps improve resolution in full waveform global tomography. However, computational costs are still a challenge. These costs can be reduced by implementing a source stacking method, in which multiple earthquake sources are simultaneously triggered in only one teleseismic SEM simulation. One drawback of this approach is the perceived loss of resolution at depth, in particular because high-amplitude fundamental mode surface waves dominate the summed waveforms, without the possibility of windowing and weighting as in conventional waveform tomography.

    This can be addressed by redefining the cost function and computing the cross-correlation wavefield between pairs of stations before each inversion iteration. While the Green's function between the two stations is not reconstructed as well as in the case of ambient noise tomography, where sources are distributed more uniformly around the globe, this is not a drawback, since the same processing is applied to the 3-D synthetics and to the data, and the source parameters are known to a good approximation. By doing so, we can separate time windows with large energy arrivals corresponding to fundamental mode surface waves. This opens the possibility of designing a weighting scheme to bring out the contribution of overtones and body waves. It also makes it possible to balance the contributions of frequently sampled paths versus rarely sampled ones, as in more conventional tomography.
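The redefined cost function compares station-pair cross-correlations of the stacked records, computed identically for data and 3-D synthetics. A toy sketch (array layout and function name hypothetical):

```python
import numpy as np

def station_pair_correlation(stacked, i, j):
    """Cross-correlate the source-stacked records at stations i and j.
    The inversion compares these correlations for data and synthetics
    instead of the raw stacked traces, which lets energetic
    fundamental-mode windows be isolated and down-weighted."""
    return np.correlate(stacked[i], stacked[j], mode='full')

# Toy example: two spikes offset by 5 samples.
nt = 32
stacked = np.zeros((2, nt))
stacked[0, 10] = 1.0
stacked[1, 15] = 1.0
cc = station_pair_correlation(stacked, 0, 1)
lag = np.argmax(cc) - (nt - 1)   # station 0 arrives 5 samples earlier
```

The correlation's lag axis is what makes windowing possible again: arrivals separate by differential travel time between the two stations, as in ambient-noise processing.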

    Here we present the results of proof of concept testing of such an approach for a synthetic 3-component long period waveform data set (periods longer than 60 s), computed for 273 globally distributed events in a simple toy 3-D radially anisotropic upper mantle model which contains shear wave anomalies at different scales. We compare the results of inversion of 10 000 s long stacked time-series, starting from a 1-D model, using source stacked waveforms and station-pair cross-correlations of these stacked waveforms in the definition of the cost function. We compute the gradient and the Hessian using normal mode perturbation theory, which avoids the problem of cross-talk encountered when forming the gradient using an adjoint approach. We perform inversions with and without realistic noise added and show that the model can be recovered equally well using one or the other cost function.

    The proposed approach is computationally very efficient. While application to more realistic synthetic data sets, and to real data, is beyond the scope of this paper, since it requires additional steps to account for issues such as missing data, we illustrate how this methodology can help inform first-order questions such as model resolution in the presence of noise, and trade-offs between different physical parameters (anisotropy, attenuation, crustal structure, etc.) that would be computationally very costly to address adequately using conventional full waveform tomography based on single-event wavefield computations.


    Earth structure beneath the Antarctic exerts an important control on the evolution of the ice sheet. A range of geological and geophysical data sets indicate that this structure is complex, with the western sector characterized by a lithosphere of thickness ∼50–100 km and viscosities within the upper mantle that vary by 2–3 orders of magnitude. Recent analyses of uplift rates estimated using Global Navigation Satellite System (GNSS) observations have inferred 1-D viscosity profiles below West Antarctica, discretized into a small set of layers within the upper mantle, using forward modelling of glacial isostatic adjustment (GIA). It remains unclear, however, what these 1-D viscosity models represent in an area with complex 3-D mantle structure, and over what geographic length-scale they are applicable. Here, we explore this issue by repeating the same modelling procedure but applied to synthetic uplift rates computed using a realistic model of 3-D viscoelastic Earth structure inferred from seismic tomographic imaging of the region, a finite volume treatment of GIA that captures this complexity, and a loading history of Antarctic ice mass changes inferred over the period 1992–2017. We find differences of up to an order of magnitude between the best-fitting 1-D inferences and regionally averaged depth profiles through the 3-D viscosity field used to generate the synthetics. Additional calculations suggest that this level of disagreement is not systematically improved if one increases the number of observation sites adopted in the analysis. Moreover, the 1-D models inferred from such a procedure are non-unique; that is, a broad range of viscosity profiles fits the synthetic uplift rates equally well, as a consequence, in part, of correlations between the viscosity values within each layer.
While the uplift rate at each GNSS site is sensitive to a unique subspace of the complex, 3-D viscosity field, additional analyses based on rates from subsets of proximal sites showed no consistent improvement in the level of bias in the 1-D inference. We also conclude that the broad, regional-scale uplift field generated with the 3-D model is poorly represented by a prediction based on the best-fitting 1-D Earth model. Future work analysing GNSS data should be extended to include horizontal rates and move towards inversions for 3-D structure that reflect the intrinsic 3-D resolving power of the data.


    Tsunami generation by offshore earthquakes is a problem of scientific interest and practical relevance, and one that requires numerical modelling for data interpretation and hazard assessment. Most numerical models use two-step methods with one-way coupling between separate earthquake and tsunami models, based on approximations that might limit the applicability and accuracy of the resulting solution. In particular, standard methods focus exclusively on tsunami wave modelling, neglecting the larger-amplitude ocean acoustic and seismic waves that are superimposed on tsunami waves in the source region. In this study, we compare four earthquake-tsunami modelling methods. We identify dimensionless parameters that quantitatively approximate the dominant wave modes in the earthquake-tsunami source region, highlight how each method's assumptions affect the results, and discuss which methods are appropriate for various applications, such as the interpretation of data from offshore instruments in the source region. Most methods couple a 3-D solid earth model, which provides the seismic wavefield or at least the static elastic displacements, with a 2-D depth-averaged shallow water tsunami model. Assuming the ocean is incompressible and tsunami propagation is negligible over the earthquake duration leads to the instantaneous source method, which equates the static earthquake seafloor uplift with the initial tsunami sea surface height. For longer duration earthquakes, it is appropriate to follow the time-dependent source method, which uses the time-dependent earthquake seafloor velocity as a forcing term in the tsunami mass balance. Neither method captures ocean acoustic or seismic waves, motivating more advanced methods that capture the full wavefield. The superposition method of Saito et al. solves the 3-D elastic and acoustic equations to model the seismic wavefield and the response of a compressible ocean without gravity.
    Then, changes in sea surface height from the zero-gravity solution are used as a forcing term in a separate tsunami simulation, typically run with a shallow water solver. A superposition of the earthquake and tsunami solutions provides an approximation to the complete wavefield. Algorithmically, this is a two-step method. The complete wavefield is captured in the fully coupled method, which uses a coupled solid earth and compressible ocean model with gravity. The fully coupled method, recently incorporated into the 3-D open-source code SeisSol, simultaneously solves earthquake rupture, seismic waves and ocean response (including gravity). We show that the superposition method emerges as an approximation to the fully coupled method subject to often well-justified assumptions. Furthermore, using the fully coupled method, we examine how the source spectrum and ocean depth influence the expression of oceanic Rayleigh waves. Understanding the range of validity of each method, as well as its computational expense, facilitates the selection of modelling methods for the accurate assessment of earthquake and tsunami hazards and the interpretation of data from offshore instruments.
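Under the instantaneous source method described above, the static seafloor uplift becomes the initial sea-surface height, which is then propagated with a shallow-water solver. A deliberately minimal 1-D linear sketch (not SeisSol or any code from the paper; grid and time-step values are chosen only for illustration):

```python
import numpy as np

def tsunami_from_uplift(uplift, depth, dx, dt, nsteps, g=9.81):
    """Instantaneous source method: initialize the sea surface with the
    static seafloor uplift, then step the 1-D linear shallow-water
    equations on a staggered grid (closed boundaries, u = 0 at walls)."""
    eta = uplift.astype(float).copy()   # sea-surface height (m)
    u = np.zeros(eta.size + 1)          # depth-averaged velocity (m/s)
    for _ in range(nsteps):
        u[1:-1] -= g * dt / dx * np.diff(eta)    # momentum balance
        eta -= depth * dt / dx * np.diff(u)      # mass balance
    return eta

# Gaussian uplift over a 4-km-deep ocean; CFL = sqrt(g*depth)*dt/dx ~ 0.4.
x = np.arange(0, 400e3, 1e3)
uplift = np.exp(-((x - 200e3) / 20e3) ** 2)
eta = tsunami_from_uplift(uplift, depth=4000.0, dx=1e3, dt=2.0, nsteps=200)
```

Because the uplift is imposed all at once, this sketch contains none of the ocean acoustic or seismic waves whose neglect motivates the superposition and fully coupled methods.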
