Search for: All records

Award ID contains: 1723033


  1. SUMMARY

    We propose a theoretical modelling framework for earthquake occurrence and clustering based on a family of invariant Galton–Watson (IGW) stochastic branching processes. The IGW process is a rigorously defined approximation to imprecisely observed or incorrectly estimated earthquake clusters modelled by Galton–Watson branching processes, including the Epidemic Type Aftershock Sequence (ETAS) model. The theory of IGW processes yields explicit distributions for multiple cluster attributes, including magnitude-dependent and magnitude-independent offspring number, cluster size and cluster combinatorial depth. Analysis of the observed seismicity in southern California demonstrates that the IGW model provides a close fit to the observed earthquake clusters. The estimated IGW parameters and derived statistics are robust with respect to the catalogue lower cut-off magnitude. The proposed model facilitates analyses of multiple quantities of seismicity based on self-similar tree attributes, and may be used to assess the proximity of seismicity to criticality.

     
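The cluster statistics described in this summary rest on the generic Galton–Watson branching mechanism, which is easy to explore by Monte Carlo. The sketch below is an illustration of that mechanism only, not the authors' IGW estimation procedure; the Poisson offspring law, the mean of 0.8, and the function name are assumptions chosen for the example.

```python
import numpy as np

def gw_cluster_size(mean_offspring, rng, cap=100_000):
    """Total number of events in one Galton-Watson cluster with Poisson
    offspring. The process is subcritical when mean_offspring < 1, so
    clusters are finite with probability one (cap is a safety bound)."""
    size = frontier = 1
    while frontier and size < cap:
        # Each event in the current generation triggers a Poisson number
        # of offspring, independently of all other events.
        children = int(rng.poisson(mean_offspring, frontier).sum())
        size += children
        frontier = children
    return size

rng = np.random.default_rng(0)
sizes = [gw_cluster_size(0.8, rng) for _ in range(2000)]
mean_size = sum(sizes) / len(sizes)  # theory: 1 / (1 - 0.8) = 5
```

For a subcritical process with offspring mean m, the expected cluster size is 1/(1 - m); the sample mean above should scatter around 5.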
  2. Abstract

We introduce an algorithm for declustering earthquake catalogs based on the nearest‐neighbor analysis of seismicity. The algorithm discriminates between background and clustered events by random thinning that removes events according to a space‐varying threshold. The threshold is estimated using randomized‐reshuffled catalogs that are stationary, have independent space and time components, and preserve the space distribution of the original catalog. Analysis of a catalog produced by the Epidemic Type Aftershock Sequence model demonstrates that the algorithm correctly classifies over 80% of background and clustered events, correctly reconstructs the stationary and space‐dependent background intensity, and shows high stability with respect to random realizations (over 75% of events have the same estimated type in over 90% of random realizations). The declustering algorithm is applied to the global Northern California Earthquake Data Center catalog with magnitudes m ≥ 4 during 2000–2015; a Southern California catalog with m ≥ 2.5, 3.5 during 1981–2017; an area around the 1992 Landers rupture zone with m ≥ 0.0 during 1981–2015; and the Parkfield segment of the San Andreas fault with m ≥ 1.0 during 1984–2014. The null hypotheses of stationarity and space‐time independence are not rejected by several tests applied to the estimated background events of the global and Southern California catalogs with magnitude ranges Δm < 4. However, both hypotheses are rejected for catalogs with a larger range of magnitudes, Δm > 4. The deviations from the nulls are mainly due to local temporal fluctuations of seismicity and activity switching among subregions; they can be traced back to the original catalogs and represent genuine features of background seismicity.

     
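The randomized-reshuffled surrogate catalogs used to estimate the thinning threshold can be sketched in a few lines. This is a minimal illustration of the three stated properties (stationary, independent space and time components, original spatial distribution preserved); the function name and the uniform redrawing of times are assumptions, not the paper's exact construction.

```python
import numpy as np

def reshuffled_catalog(times, lons, lats, rng):
    """Surrogate catalog: epicenters are kept (space distribution
    preserved), times are redrawn uniformly over the catalog span
    (stationary), and the space/time pairing is broken by a random
    permutation (independent space and time components)."""
    n = len(times)
    new_times = np.sort(rng.uniform(times.min(), times.max(), n))
    order = rng.permutation(n)
    return new_times, lons[order], lats[order]

# Hypothetical catalog: 500 events with uniform epicenters and times.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 500))
lon = rng.uniform(-120.0, -114.0, 500)
lat = rng.uniform(32.0, 37.0, 500)
t2, lon2, lat2 = reshuffled_catalog(t, lon, lat, rng)
```

Applying the nearest-neighbor statistic to many such surrogates gives the null distribution from which a space-varying thinning threshold can be read off.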
  3. Abstract

    Clustering is a fundamental feature of earthquakes that impacts basic and applied analyses of seismicity. Events included in the existing short-duration instrumental catalogs are concentrated strongly within a very small fraction of the space–time volume, which is highly amplified by activity associated with the largest recorded events. The earthquakes that are included in instrumental catalogs are unlikely to be fully representative of the long-term behavior of regional seismicity. We illustrate this and other aspects of space–time earthquake clustering, and propose a quantitative clustering measure based on the receiver operating characteristic diagram. The proposed approach allows eliminating effects of marginal space and time inhomogeneities related to the geometry of the fault network and regionwide changes in earthquake rates, and quantifying coupled space–time variations that include aftershocks, swarms, and other forms of clusters. The proposed measure is used to quantify and compare earthquake clustering in southern California, western United States, central and eastern United States, Alaska, Japan, and epidemic-type aftershock sequence model results. All examined cases show a high degree of coupled space–time clustering, with the marginal space clustering dominating the marginal time clustering. Declustering earthquake catalogs can help clarify long-term aspects of regional seismicity and increase the signal-to-noise ratio of effects that are subtler than the strong clustering signatures. We illustrate how the high coupled space–time clustering can be decreased or removed using a data-adaptive parsimonious nearest-neighbor declustering approach, and emphasize basic unresolved issues on the proper outcome and quality metrics of declustering. At present, declustering remains an exploratory tool, rather than a rigorous optimization problem, and selecting an appropriate declustering method should depend on the data and problem at hand.
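A receiver-operating-characteristic-style clustering measure of the kind described can be sketched by sorting space–time cells from most to least active and tracing the fraction of events captured against the fraction of cells used: the farther the curve rises above the diagonal, the stronger the clustering. The cell counts and the gamma–Poisson "clustered" surrogate below are hypothetical test data, not the paper's catalogs.

```python
import numpy as np

def concentration_curve(counts):
    """Sort space-time cells by event count (descending) and return
    (fraction of cells used, fraction of events captured).  A curve
    hugging the diagonal means homogeneous seismicity; a curve far
    above it means strong coupled space-time clustering."""
    c = np.sort(np.asarray(counts, dtype=float))[::-1]
    events = np.concatenate([[0.0], np.cumsum(c) / c.sum()])
    cells = np.linspace(0.0, 1.0, c.size + 1)
    return cells, events

def area_under(cells, events):
    """Trapezoidal area under the curve; 0.5 = no clustering, 1 = all
    events in a vanishing fraction of cells."""
    return float(((events[1:] + events[:-1]) / 2 * np.diff(cells)).sum())

rng = np.random.default_rng(1)
homogeneous = rng.poisson(5.0, 400)                      # uniform rate
clustered = rng.poisson(rng.gamma(0.3, 5.0 / 0.3, 400))  # overdispersed
auc_h = area_under(*concentration_curve(homogeneous))
auc_c = area_under(*concentration_curve(clustered))
```

The overdispersed surrogate concentrates most events in a few cells, so its area statistic clearly exceeds that of the homogeneous one.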
  4. SUMMARY

    We examine localization processes of low magnitude seismicity in relation to the occurrence of large earthquakes using three complementary analyses: (i) estimated production of rock damage by background events, (ii) evolving occupied fractional area of background seismicity and (iii) progressive coalescence of individual earthquakes into clusters. The different techniques provide information on different time scales and on the spatial extent of weakened damaged regions. Techniques (i) and (ii) use declustered catalogues to avoid the occasional strong fluctuations associated with aftershock sequences, while technique (iii) examines developing clusters in entire catalogue data. We analyse primarily earthquakes around large faults that are locked in the interseismic periods, and also examine, as a contrasting example, seismicity from the creeping Parkfield section of the San Andreas fault. Results of analysis (i) show that the M > 7 Landers 1992, Hector Mine 1999, El Mayor-Cucapah 2010 and Ridgecrest 2019 main shocks in Southern and Baja California were preceded in the previous decades by generation of rock damage around the eventual rupture zones. Analysis (ii) reveals localization (reduced fractional area) 2–3 yr before these main shocks and before the M > 7 Düzce 1999 earthquake in Turkey. Results with technique (iii) indicate that individual events tend to coalesce rapidly to clusters in the final 1–2 yr before the main shocks. Corresponding analyses of data from the Parkfield region show opposite delocalization patterns and decreasing clustering before the 2004 M6 earthquake. Continuing studies with these techniques, combined with analysis of geodetic data and insights from laboratory experiments and model simulations, might improve the ability to track preparation processes leading to large earthquakes.
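Analysis (ii), the evolving occupied fractional area of background seismicity, reduces in essence to counting the spatial grid cells that contain events in a given time window. A minimal sketch, with a hypothetical grid and synthetic event coordinates (not the paper's gridding or catalogs):

```python
import numpy as np

def occupied_fraction(lons, lats, lon_edges, lat_edges):
    """Fraction of spatial grid cells containing at least one event.
    Tracked over sliding time windows, a decreasing fraction indicates
    seismicity localizing around a future rupture zone."""
    counts, _, _ = np.histogram2d(lons, lats, bins=[lon_edges, lat_edges])
    return float((counts > 0).mean())

edges = np.linspace(0.0, 1.0, 11)        # hypothetical 10 x 10 grid
rng = np.random.default_rng(2)
spread_lon = rng.uniform(0.0, 1.0, 300)  # delocalized seismicity
spread_lat = rng.uniform(0.0, 1.0, 300)
tight_lon = rng.uniform(0.0, 0.1, 300)   # events confined to one cell
tight_lat = rng.uniform(0.0, 0.1, 300)
f_spread = occupied_fraction(spread_lon, spread_lat, edges, edges)
f_tight = occupied_fraction(tight_lon, tight_lat, edges, edges)
```

The delocalized sample occupies most of the grid, while the confined sample occupies a single cell (fraction 0.01), mirroring the reduced fractional area reported 2–3 yr before the main shocks.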