The clustering of earthquake magnitudes is poorly understood compared to spatial and temporal clustering. Better understanding of correlations between earthquake magnitudes could provide insight into the mechanisms of earthquake rupture and fault interactions, and improve earthquake forecasting models. In this study we present a novel method of examining how seismic magnitude clustering occurs beyond the next event in the catalog and evolves with time and space between earthquake events. We first evaluate the clustering signature over time and space using double-difference located catalogs from Southern and Northern California. The strength of magnitude clustering appears to decay linearly with distance between events and logarithmically with time. The signature persists for longer distances (more than 50 km) and times (several days) than previously thought, indicating that magnitude clustering is not driven solely by repeated rupture of an identical fault patch or Omori aftershock processes. The decay patterns occur in all magnitude ranges of the catalog and are demonstrated across multiple methodologies of study. These patterns are also shown to be present in laboratory rock fracture catalogs but absent in ETAS synthetic catalogs. Incorporating magnitude clustering decay patterns into earthquake forecasting models such as ETAS could improve their accuracy.
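One simple way to probe magnitude clustering of the kind described above is to compare the mean absolute magnitude difference between successive catalog events against the same statistic for magnitude-shuffled catalogs. The sketch below is illustrative only, not the authors' method; the statistic and parameter choices are assumptions.

```python
import random

def clustering_signal(mags, trials=200, seed=0):
    """Mean |dM| between successive events, minus the same statistic
    averaged over magnitude-shuffled catalogs. A negative value suggests
    that similar magnitudes tend to follow each other (clustering)."""
    rng = random.Random(seed)
    n = len(mags)
    observed = sum(abs(b - a) for a, b in zip(mags, mags[1:])) / (n - 1)
    shuffled_total = 0.0
    for _ in range(trials):
        s = mags[:]
        rng.shuffle(s)  # destroys any magnitude ordering, keeps the distribution
        shuffled_total += sum(abs(b - a) for a, b in zip(s, s[1:])) / (n - 1)
    return observed - shuffled_total / trials
```

Applied to a toy catalog in which similar magnitudes occur in runs, the signal comes out negative, consistent with the clustering signature the abstract describes.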
Responding to Media Inquiries about Earthquake Triggering Interactions
Abstract In the aftermath of a significant earthquake, seismologists are frequently asked by the media and public about possible interactions with recent prior events, including events at great distances, and about the prospects of larger events to come, both locally and remotely. For regions with substantial earthquake catalogs that constrain the regional Gutenberg–Richter magnitude–frequency relationship, Omori temporal aftershock decay, and aftershock productivity parameters, probabilistic answers can be provided for the likelihood of nearby future events of larger magnitude, as well as the expected behavior of the overall aftershock sequence. However, such procedures generally involve uncertain extrapolations of parameterized equations to infrequent large events, and they do not address inquiries about long-range interactions, either retrospectively for interaction with prior remote large events or prospectively for interaction with future remote large events. Dynamic triggering that may be involved in such long-range interactions does occur, often with significant temporal delay, but is not well understood, making related inquiries difficult to answer. One approach is to provide retrospective or prospective occurrence histories for large earthquakes based on global catalogs; while this does not provide quantitative understanding of any physical interaction, experience-based guidance on the (typically very low) chances of causal interactions can inform public understanding of the likelihood of scenarios of common interest.
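The probabilistic responses mentioned above typically combine the Gutenberg–Richter relation with a Poisson occurrence assumption: if N events above the completeness magnitude are expected in a sequence, the chance that at least one reaches magnitude m scales as 10^(-b(m - Mc)). A minimal sketch, with purely illustrative parameter values:

```python
import math

def prob_aftershock_at_least(m, n_expected, b=1.0, m_c=3.0):
    """Poisson probability that at least one aftershock reaches
    magnitude >= m, given n_expected events above completeness m_c
    and a Gutenberg-Richter b-value. Parameter values are illustrative."""
    rate = n_expected * 10 ** (-b * (m - m_c))  # expected count with M >= m
    return 1.0 - math.exp(-rate)
```

For example, with 100 expected aftershocks above M 3.0 and b = 1.0, the chance of at least one M >= 6.0 event is about 9.5%. This is the kind of parameterized extrapolation the abstract cautions becomes uncertain for infrequent large events.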
- Award ID(s): 1802364
- PAR ID: 10330526
- Date Published:
- Journal Name: Seismological Research Letters
- Volume: 92
- Issue: 5
- ISSN: 0895-0695
- Page Range / eLocation ID: 3035 to 3045
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract Recognizing earthquakes as foreshocks in real time would provide a valuable forecasting capability. In a recent study, Gulia and Wiemer (2019) proposed a traffic-light system that relies on abrupt changes in b-values relative to background values. The approach utilizes high-resolution earthquake catalogs to monitor localized regions around the largest events and distinguish foreshock sequences (reduced b-values) from aftershock sequences (increased b-values). The recent well-recorded earthquake foreshock sequences in Ridgecrest, California, and Maria Antonia, Puerto Rico, provide an opportunity to test the procedure. For Ridgecrest, our b-value time series indicates an elevated risk of a larger impending earthquake during the Mw 6.4 foreshock sequence and provides an ambiguous identification of the onset of the Mw 7.1 aftershock sequence. However, the exact result depends strongly on expert judgment. Monte Carlo sampling across a range of reasonable decisions most often results in ambiguous warning levels. In the case of the Puerto Rico sequence, we record significant drops in b-value prior to and following the largest event (Mw 6.4) in the sequence. The b-value has still not returned to background levels (12 February 2020). The Ridgecrest sequence roughly conforms to expectations; the Puerto Rico sequence will only do so if a larger event occurs in the future with an ensuing b-value increase. Any real-time implementation of this approach will require dense instrumentation, consistent (versioned) low completeness catalogs, well-calibrated maps of regionalized background b-values, systematic real-time catalog production, and robust decision making about the event source volumes to analyze.
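The b-value time series behind a traffic-light scheme like the one above is usually built from the standard Aki (1965) maximum-likelihood estimator, with Utsu's correction for magnitude binning. A minimal sketch (the binning width and completeness values are assumptions for illustration):

```python
import math

def b_value_mle(mags, m_c, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with Utsu's correction for
    magnitude binning of width dm; uses only events at/above completeness m_c."""
    above = [m for m in mags if m >= m_c]
    mean_m = sum(above) / len(above)
    return math.log10(math.e) / (mean_m - (m_c - dm / 2))
```

Sliding this estimator over a windowed catalog, and comparing each window's b-value to a calibrated background level, is the core of the foreshock/aftershock discrimination the abstract describes; the sensitivity to window and completeness choices is exactly the "expert judgment" problem it raises.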
Abstract Seismology is witnessing explosive growth in the diversity and scale of earthquake catalogs. A key motivation for this community effort is that more data should translate into better earthquake forecasts. Such improvements are yet to be seen. Here, we introduce the Recurrent Earthquake foreCAST (RECAST), a deep‐learning model based on recent developments in neural temporal point processes. The model enables access to a greater volume and diversity of earthquake observations, overcoming the theoretical and computational limitations of traditional approaches. We benchmark against a temporal Epidemic Type Aftershock Sequence model. Tests on synthetic data suggest that with a modest‐sized data set, RECAST accurately models earthquake‐like point processes directly from cataloged data. Tests on earthquake catalogs in Southern California indicate improved fit and forecast accuracy compared to our benchmark when the training set is sufficiently long (>10⁴ events). The basic components in RECAST add flexibility and scalability for earthquake forecasting without sacrificing performance.
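The temporal ETAS benchmark referred to above models seismicity as a self-exciting point process: a background rate plus Omori–Utsu decaying contributions from past events, weighted by a magnitude-dependent productivity term. A minimal sketch of the conditional intensity, with illustrative (not fitted) parameter values:

```python
def etas_intensity(t, events, mu=0.1, K=0.05, alpha=1.0, c=0.01, p=1.1, m_c=3.0):
    """Temporal ETAS conditional intensity at time t: background rate mu
    plus Omori-Utsu decaying contributions from each past event (t_i, m_i),
    scaled by Gutenberg-Richter-style productivity in magnitude.
    Parameter values are illustrative, not fitted to any catalog."""
    lam = mu
    for t_i, m_i in events:
        if t_i < t:  # only past events contribute
            lam += K * 10 ** (alpha * (m_i - m_c)) / (t - t_i + c) ** p
    return lam
```

A neural temporal point process such as RECAST replaces this fixed parametric intensity with one computed by a recurrent network conditioned on the event history, which is where the added flexibility the abstract mentions comes from.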
Abstract Clustering of earthquake magnitudes is still actively debated, compared to well-established spatial and temporal clustering. Magnitude clustering is not currently implemented in earthquake forecasting but would be important if larger magnitude events are more likely to be followed by similar sized events. Here we show statistically significant magnitude clustering present in many different field and laboratory catalogs at a wide range of spatial scales (mm to 1000 km). It is universal in field catalogs across fault types and tectonic/induced settings, while laboratory results are unaffected by loading protocol or rock types and show temporal stability. The absence of clustering can be imposed by a global tensile stress, although clustering still occurs when isolating to triggered event pairs or spatial patches where shear stress dominates. Magnitude clustering is most prominent at short time and distance scales and modeling indicates >20% repeating magnitudes in some cases, implying it can help to narrow physical mechanisms for seismogenesis.
Abstract Clustering is a fundamental feature of earthquakes that impacts basic and applied analyses of seismicity. Events included in the existing short-duration instrumental catalogs are concentrated strongly within a very small fraction of the space–time volume, which is highly amplified by activity associated with the largest recorded events. The earthquakes that are included in instrumental catalogs are unlikely to be fully representative of the long-term behavior of regional seismicity. We illustrate this and other aspects of space–time earthquake clustering, and propose a quantitative clustering measure based on the receiver operating characteristic diagram. The proposed approach allows eliminating effects of marginal space and time inhomogeneities related to the geometry of the fault network and regionwide changes in earthquake rates, and quantifying coupled space–time variations that include aftershocks, swarms, and other forms of clusters. The proposed measure is used to quantify and compare earthquake clustering in southern California, western United States, central and eastern United States, Alaska, Japan, and epidemic-type aftershock sequence model results. All examined cases show a high degree of coupled space–time clustering, with the marginal space clustering dominating the marginal time clustering. Declustering earthquake catalogs can help clarify long-term aspects of regional seismicity and increase the signal-to-noise ratio of effects that are subtler than the strong clustering signatures. We illustrate how the high coupled space–time clustering can be decreased or removed using a data-adaptive parsimonious nearest-neighbor declustering approach, and emphasize basic unresolved issues on the proper outcome and quality metrics of declustering. At present, declustering remains an exploratory tool, rather than a rigorous optimization problem, and selecting an appropriate declustering method should depend on the data and problem at hand.
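Nearest-neighbor declustering of the kind mentioned above is commonly built on a Zaliapin–Ben-Zion style space–time–magnitude proximity between a candidate parent and a later event; small values flag clustered (likely triggered) pairs. The sketch below is illustrative, and the b-value and fractal dimension defaults are assumptions rather than fitted parameters:

```python
def nn_proximity(t_parent, t_child, r_km, m_parent, b=1.0, d_f=1.6):
    """Zaliapin-Ben-Zion style proximity eta = dt * r^d_f * 10^(-b * m_parent)
    between a candidate parent and a later event. Small eta indicates a
    clustered (likely triggered) pair; b and d_f values are illustrative."""
    if t_child <= t_parent:
        return float("inf")  # only later events can be children
    dt = t_child - t_parent  # inter-event time (e.g. years)
    return dt * (r_km ** d_f) * 10 ** (-b * m_parent)
```

In practice, each event is linked to the parent minimizing this proximity, and a threshold on the resulting distribution separates clustered pairs from background seismicity; choosing that threshold is one of the unresolved declustering issues the abstract raises.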