We introduce an algorithm for declustering earthquake catalogs based on the nearest-neighbor analysis of seismicity. The algorithm discriminates between background and clustered events by random thinning that removes events according to a space-varying threshold. The threshold is estimated using randomized-reshuffled catalogs that are stationary, have independent space and time components, and preserve the space distribution of the original catalog. Analysis of a catalog produced by the Epidemic-Type Aftershock Sequence (ETAS) model demonstrates that the algorithm correctly classifies over 80% of background and clustered events, correctly reconstructs the stationary and space-dependent background intensity, and shows high stability with respect to random realizations (over 75% of events have the same estimated type in over 90% of random realizations). The declustering algorithm is applied to the global Northern California Earthquake Data Center catalog with magnitudes m ≥ 4 during 2000–2015; a Southern California catalog with m ≥ 2.5, 3.5 during 1981–2017; an area around the 1992 Landers rupture zone with m ≥ 0.0 during 1981–2015; and the Parkfield segment of the San Andreas fault with m ≥ 1.0 during 1984–2014. The null hypotheses of stationarity and space-time independence are not rejected by several tests applied to the estimated background events of the global and Southern California catalogs with magnitude ranges Δm < 4. However, both hypotheses are rejected for catalogs with a larger range of magnitudes, Δm > 4. The deviations from the nulls are mainly due to local temporal fluctuations of seismicity and activity switching among subregions; they can be traced back to the original catalogs and represent genuine features of background seismicity.
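As a rough illustration of this family of nearest-neighbor declustering methods, the Python sketch below computes the widely used proximity η = t · r^d · 10^(−b·m) to the closest earlier event and applies a random-thinning rule against a threshold estimated from randomized-reshuffled catalogs. It is a minimal sketch, not the paper's implementation: the flat-Earth distances, the uniform-time reshuffling, the single (rather than space-varying) threshold, and the η/η0 retention rule are simplifying assumptions.

```python
import numpy as np

def nn_proximity(times, lons, lats, mags, d_f=1.6, b=1.0):
    """Nearest-neighbor proximity eta_ij = t_ij * r_ij**d_f * 10**(-b * m_i)
    to each earlier (parent) event; returns the minimum over parents.
    Assumes events sorted by time (years); distances use a crude
    flat-Earth approximation in km."""
    n = len(times)
    eta = np.full(n, np.inf)
    for j in range(1, n):
        dt = times[j] - times[:j]
        dx = (lons[j] - lons[:j]) * 111.0 * np.cos(np.radians(lats[j]))
        dy = (lats[j] - lats[:j]) * 111.0
        r = np.maximum(np.hypot(dx, dy), 1e-3)
        eta[j] = np.min(dt * r**d_f * 10.0 ** (-b * mags[:j]))
    return eta

def reshuffled_threshold(times, lons, lats, mags, n_iter=20, alpha=0.5, seed=0):
    """Proximity threshold from randomized-reshuffled catalogs: times are
    redrawn uniformly (stationary), magnitudes permuted, epicenters kept,
    so space and time become independent while the spatial distribution is
    preserved.  A single quantile stands in for the paper's space-varying
    threshold."""
    rng = np.random.default_rng(seed)
    pool = []
    for _ in range(n_iter):
        t_r = np.sort(rng.uniform(times.min(), times.max(), size=len(times)))
        pool.append(nn_proximity(t_r, lons, lats, rng.permutation(mags)))
    pool = np.concatenate(pool)
    return np.quantile(pool[np.isfinite(pool)], alpha)

def decluster(times, lons, lats, mags, seed=0):
    """Random thinning: an event is kept as background with probability
    min(eta / eta0, 1), so weakly linked events are almost surely background
    and strongly clustered ones are almost surely removed."""
    rng = np.random.default_rng(seed)
    eta = nn_proximity(times, lons, lats, mags)
    eta0 = reshuffled_threshold(times, lons, lats, mags)
    is_background = rng.random(len(times)) < np.clip(eta / eta0, 0.0, 1.0)
    return is_background
```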
- NSF-PAR ID: 10338873
- Date Published:
- Journal Name: Seismological Research Letters
- Volume: 93
- Issue: 1
- ISSN: 0895-0695
- Page Range / eLocation ID: 386 to 401
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Earthquakes are clustered in space and time, with individual sequences composed of events linked by stress transfer and triggering mechanisms. On a global scale, variations in the productivity of earthquake sequences—a normalized measure of the number of triggered events—have been observed and associated with regional variations in tectonic setting. Here, we focus on resolving systematic variations in the productivity of crustal earthquake sequences in California and Nevada—the two most seismically active states in the western United States. We apply a well-tested nearest-neighbor algorithm to automatically extract earthquake sequence statistics from a unified 40 yr compilation of regional earthquake catalogs that is complete to M ∼ 2.5. We then compare earthquake sequence productivity to geophysical parameters that may influence earthquake processes, including heat flow, temperature at seismogenic depth, complexity of Quaternary faulting, geodetic strain rates, depth to crystalline basement, and faulting style. We observe coherent spatial variations in sequence productivity, with higher values in the Walker Lane of eastern California and Nevada than along the San Andreas fault system in western California. The results illuminate significant correlations between productivity and heat flow, temperature, and faulting that contribute to our understanding of, and ability to forecast, crustal earthquake sequences in the area.
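As a hedged sketch of how sequence statistics of this kind might be extracted and compared against a geophysical field, the Python fragment below counts direct offspring per parent from nearest-neighbor parent assignments and computes a Spearman rank correlation between gridded productivity and gridded heat flow. The parent-assignment input, the 2-magnitude-unit aperture, and the 0.5-degree binning are illustrative assumptions rather than the exact procedure used in the study.

```python
import numpy as np
from scipy.stats import spearmanr

def sequence_productivity(parent_ids, mags, dm=2.0):
    """Count each event's direct offspring with magnitude within dm units of
    the parent, a common normalized productivity measure.  parent_ids[i] is
    the index of event i's nearest-neighbor parent, or -1 for background."""
    counts = np.zeros(len(mags), dtype=int)
    for i, p in enumerate(parent_ids):
        if p >= 0 and mags[i] >= mags[p] - dm:
            counts[p] += 1
    return counts

def gridded_correlation(lons, lats, values, f_lons, f_lats, f_vals, cell=0.5):
    """Bin event-wise values (e.g., productivity) and a geophysical field
    (e.g., heat flow) onto the same lon/lat grid and return their Spearman
    rank correlation and p-value."""
    def grid_mean(x, y, v):
        cells = {}
        for ix, iy, val in zip(np.floor(x / cell), np.floor(y / cell), v):
            cells.setdefault((ix, iy), []).append(val)
        return {k: float(np.mean(vs)) for k, vs in cells.items()}

    a, b = grid_mean(lons, lats, values), grid_mean(f_lons, f_lats, f_vals)
    common = sorted(set(a) & set(b))
    if len(common) < 3:
        return float("nan"), float("nan")
    rho, p = spearmanr([a[c] for c in common], [b[c] for c in common])
    return rho, p
```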
-
Exploring the spatiotemporal distribution of earthquake activity, especially earthquake migration along fault systems, can greatly aid understanding of the basic mechanics of earthquakes and the assessment of earthquake risk. By establishing a three-dimensional strike-slip fault model, we derive the stress response and fault slip along the fault under regional stress conditions. Our study helps to create a long-term, complete earthquake catalog. We trained Long Short-Term Memory (LSTM) networks for pattern recognition on the synthetic earthquake catalog, and the performance of the models was compared using the mean-squared error (MSE). Applying the LSTM yielded a meaningful MSE value of 0.08%. Our best model can predict the time and magnitude of earthquakes with magnitudes greater than Mw = 6.5 with a similar clustering period. These results indicate that applying LSTM networks to spatiotemporal series prediction offers a potential tool for studying earthquake mechanics and forecasting major earthquake events.
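To make the modeling step concrete, here is a minimal Keras sketch of the kind of LSTM regressor described above, trained with mean-squared error on sliding windows of (inter-event time, magnitude) pairs. The window length, layer size, and two-feature input are assumptions for illustration, not the authors' architecture.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_windows(interevent_times, mags, window=20):
    """Build sliding windows of (inter-event time, magnitude) pairs from a
    synthetic catalog, with the next event as the regression target."""
    x = np.stack([interevent_times, mags], axis=1).astype("float32")
    X = np.stack([x[i:i + window] for i in range(len(x) - window)])
    y = x[window:]
    return X, y

def build_lstm(window=20, units=32):
    """A small LSTM regressor compiled with an MSE loss, mirroring the
    MSE-based model comparison in the abstract."""
    model = keras.Sequential([
        layers.Input(shape=(window, 2)),
        layers.LSTM(units),
        layers.Dense(2),  # predicts (next inter-event time, next magnitude)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Usage on placeholder arrays `dt_yr` (inter-event times) and `mags`:
# X, y = make_windows(dt_yr, mags)
# model = build_lstm()
# hist = model.fit(X, y, epochs=50, validation_split=0.2, verbose=0)
# print("validation MSE:", hist.history["val_loss"][-1])
```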
-
Measures of foreshock occurrence are systematically examined using earthquake catalogs for eight regions (Italy, southern California, northern California, Costa Rica, Onshore Japan, Alaska, Turkey, and Greece) after imposing a magnitude ≥3.0 completeness level. Foreshocks are identified using three approaches: a magnitude-dependent space + fixed-time windowing method, a nearest-neighbor clustering method, and a modified magnitude-dependent space + variable-time windowing method. The method with fixed-time windows systematically yields higher counts of foreshocks than the other two clustering methods. We find similar counts of foreshocks across the three methods when the magnitude aperture is equalized by including only earthquakes in the magnitude range M* − 2 ≤ M < M*, in which M* is the mainshock magnitude. For most of the catalogs (excluding Italy and southern California), the measured b-values of the foreshocks of all region-specific mainshocks are lower by 0.1–0.2 than b-values of respective aftershocks. Allowing for variable-time windows results in relatively high probabilities of having at least one foreshock in Italy (∼43%–56%), compared to other regional catalogs. Foreshock probabilities decrease to 14%–41% for regions such as Turkey, Greece, and Costa Rica. Similar trends are found when requiring at least five foreshocks in a sequence to be considered. Estimates of foreshock probabilities for each mainshock are method dependent; however, consistent regional trends exist regardless of method, with regions such as Italy and southern California producing more observable foreshocks than Turkey and Greece. Some regions with relatively high background seismicity have comparatively low probabilities of detectable foreshock activity when using methods that account for variable background, possibly due to depletion of near-failure fault conditions by background activity.
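For concreteness, the sketch below implements one fixed-time, magnitude-dependent-space foreshock count with the equalized aperture M* − 2 ≤ M < M*, plus the standard Aki/Utsu maximum-likelihood b-value with a half-bin correction. The 30-day window, the distance scaling, and the completeness settings are illustrative assumptions, not the specific parameters used in the study.

```python
import numpy as np

def count_foreshocks_fixed_window(times, lons, lats, mags, i_main,
                                  days=30.0, c_km=10.0, dm=2.0):
    """Foreshocks of mainshock i_main: events in the `days` before it, inside
    a magnitude-dependent radius r = c_km * 10**(0.5 * (M* - 4)) km (an
    assumed scaling), restricted to the aperture M* - dm <= M < M*.
    `times` are in years."""
    m_star = mags[i_main]
    r_km = c_km * 10.0 ** (0.5 * (m_star - 4.0))
    dx = (lons - lons[i_main]) * 111.0 * np.cos(np.radians(lats[i_main]))
    dy = (lats - lats[i_main]) * 111.0
    r = np.hypot(dx, dy)
    dt_days = (times[i_main] - times) * 365.25
    sel = ((dt_days > 0) & (dt_days <= days) & (r <= r_km)
           & (mags >= m_star - dm) & (mags < m_star))
    return int(sel.sum())

def b_value(mags, m_c=3.0, dm_bin=0.1):
    """Aki/Utsu maximum-likelihood b-value for events at or above the
    completeness magnitude m_c, with the standard half-bin correction."""
    m = mags[mags >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm_bin / 2.0))
```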
-
One of the most universal statistical properties of earthquakes is the tendency to cluster in space and time. Yet while clustering is pervasive, individual earthquake sequences can vary markedly in duration, spatial extent, and time evolution. In July 2014, a prolific earthquake sequence initiated within the Sheldon Wildlife Refuge in northwest Nevada, USA. The sequence produced 26 M4 earthquakes and several hundred M3s, with no clear mainshock or obvious driving force. Here we combine a suite of seismological analysis techniques to better characterize this unusual earthquake sequence. High-precision relocations reveal a clear, east-dipping normal fault as the dominant structure that intersects with a secondary, subvertical cross fault. Seismicity occurs in bursts of activity along these two structures before eventually transitioning to shallower structures to the east. Inversion of hundreds of moment tensors constrains the overall normal-faulting stress regime. Source spectral analysis suggests that the stress drops and rupture properties of these events are typical for tectonic earthquakes in the western US. While regional station coverage is sparse in this remote study region, the timely installation of a temporary seismometer allows us to detect nearly 70,000 earthquakes over a 40-month time period when seismic activity is highest. Such immense productivity is difficult to reconcile with current understanding of crustal deformation in the region and may be facilitated by local hydrothermal processes and earthquake triggering at the transitional intersection of subparallel fault systems.
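As a small aside on the source-spectral step mentioned above, a Brune-model stress-drop estimate of the kind commonly used in such studies might look like the sketch below. The shear-wave speed, the k constant relating corner frequency to source radius, and the example numbers are assumed values; this is not the authors' code.

```python
import numpy as np

def moment_from_mw(mw):
    """Seismic moment in N*m from moment magnitude (Hanks & Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.1)

def brune_stress_drop(mw, corner_freq_hz, beta_km_s=3.5, k=0.37):
    """Brune-model stress drop in Pa: source radius r = k * beta / fc and
    delta_sigma = (7 / 16) * M0 / r**3."""
    m0 = moment_from_mw(mw)
    r_m = k * (beta_km_s * 1000.0) / corner_freq_hz
    return 7.0 / 16.0 * m0 / r_m ** 3

# Example: an Mw 4.0 event with a 2 Hz corner frequency gives roughly 2 MPa.
# print(brune_stress_drop(4.0, 2.0) / 1e6, "MPa")
```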