Abstract We introduce an algorithm for declustering earthquake catalogs based on the nearest‐neighbor analysis of seismicity. The algorithm discriminates between background and clustered events by random thinning that removes events according to a space‐varying threshold. The threshold is estimated using randomized‐reshuffled catalogs that are stationary, have independent space and time components, and preserve the space distribution of the original catalog. Analysis of a catalog produced by the Epidemic Type Aftershock Sequence model demonstrates that the algorithm correctly classifies over 80% of background and clustered events, correctly reconstructs the stationary and space‐dependent background intensity, and shows high stability with respect to random realizations (over 75% of events have the same estimated type in over 90% of random realizations). The declustering algorithm is applied to the global Northern California Earthquake Data Center catalog with magnitudes m ≥ 4 during 2000–2015; a Southern California catalog with m ≥ 2.5 and 3.5 during 1981–2017; an area around the 1992 Landers rupture zone with m ≥ 0.0 during 1981–2015; and the Parkfield segment of the San Andreas fault with m ≥ 1.0 during 1984–2014. The null hypotheses of stationarity and space‐time independence are not rejected by several tests applied to the estimated background events of the global and Southern California catalogs with magnitude ranges Δm < 4. However, both hypotheses are rejected for catalogs with a larger range of magnitudes, Δm > 4. The deviations from the nulls are mainly due to local temporal fluctuations of seismicity and activity switching among subregions; they can be traced back to the original catalogs and represent genuine features of background seismicity.
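The nearest‐neighbor proximity at the heart of such declustering combines the time gap, epicentral distance, and parent magnitude into a single scalar. Below is a minimal sketch assuming a Baiesi–Paczuski‐style metric η = t · r^d · 10^(−b·m) with illustrative constants b and d and a hypothetical toy catalog; the real algorithm estimates a space‐varying threshold from randomized‐reshuffled catalogs, whereas here a fixed threshold eta0 stands in:

```python
import math

def nn_proximity(parent, child, b=1.0, d=1.6):
    """Space-time-magnitude proximity between a candidate parent and a
    later child event. Events are (time_days, x_km, y_km, magnitude)."""
    dt = child[0] - parent[0]
    if dt <= 0:
        return math.inf          # parents must precede children
    r = math.hypot(child[1] - parent[1], child[2] - parent[2])
    r = max(r, 1e-3)             # avoid r = 0 for colocated events
    return dt * (r ** d) * 10 ** (-b * parent[3])

def classify(catalog, eta0):
    """Label each event 'background' or 'clustered' by comparing its
    nearest-neighbor proximity against the threshold eta0."""
    labels = []
    for j, ev in enumerate(catalog):
        eta = min((nn_proximity(catalog[i], ev) for i in range(j)),
                  default=math.inf)
        labels.append('clustered' if eta < eta0 else 'background')
    return labels

catalog = [
    (0.0,   0.0,  0.0, 5.0),   # mainshock
    (0.5,   1.0,  0.5, 3.0),   # nearby and soon after
    (200., 80.0, 60.0, 4.0),   # far in space and time
]
print(classify(catalog, eta0=1.0))
```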
                            Using Deep Learning for Flexible and Scalable Earthquake Forecasting
Abstract Seismology is witnessing explosive growth in the diversity and scale of earthquake catalogs. A key motivation for this community effort is that more data should translate into better earthquake forecasts. Such improvements are yet to be seen. Here, we introduce the Recurrent Earthquake foreCAST (RECAST), a deep‐learning model based on recent developments in neural temporal point processes. The model enables access to a greater volume and diversity of earthquake observations, overcoming the theoretical and computational limitations of traditional approaches. We benchmark against a temporal Epidemic Type Aftershock Sequence model. Tests on synthetic data suggest that with a modest‐sized data set, RECAST accurately models earthquake‐like point processes directly from cataloged data. Tests on earthquake catalogs in Southern California indicate improved fit and forecast accuracy compared to our benchmark when the training set is sufficiently long (>10⁴ events). The basic components in RECAST add flexibility and scalability for earthquake forecasting without sacrificing performance.
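Any temporal point‐process model, neural or parametric, is scored by the same log‐likelihood: the sum of log‐intensities at the observed event times minus the integrated intensity (compensator). A minimal sketch on hypothetical event times, comparing a self‐exciting exponential‐kernel Hawkes intensity, the structure that ETAS‐type and neural models generalize, against a homogeneous Poisson baseline; RECAST's learned recurrent intensity is not reproduced here:

```python
import math

def poisson_loglik(event_times, rate, t_end):
    """Log-likelihood of a homogeneous Poisson process on [0, t_end]:
    log L = N * log(rate) - rate * t_end."""
    return len(event_times) * math.log(rate) - rate * t_end

def hawkes_loglik(event_times, mu, alpha, beta, t_end):
    """Log-likelihood of an exponential-kernel Hawkes process with
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
    ll = 0.0
    for j, t in enumerate(event_times):
        lam = mu + sum(alpha * math.exp(-beta * (t - s))
                       for s in event_times[:j])
        ll += math.log(lam)
    # compensator: integral of lambda(t) over [0, t_end]
    comp = mu * t_end + sum(alpha / beta * (1 - math.exp(-beta * (t_end - s)))
                            for s in event_times)
    return ll - comp

# hypothetical catalog: a burst of events followed by stragglers
times = [0.2, 0.25, 0.3, 2.0, 5.5]
print(hawkes_loglik(times, mu=0.5, alpha=0.8, beta=2.0, t_end=10.0)
      > poisson_loglik(times, rate=0.5, t_end=10.0))
```

The self‐exciting model wins on clustered data because it assigns high intensity right after each event, exactly where the next events fall.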
- Award ID(s): 1761987
- PAR ID: 10462172
- Publisher / Repository: DOI PREFIX: 10.1029
- Date Published:
- Journal Name: Geophysical Research Letters
- Volume: 50
- Issue: 17
- ISSN: 0094-8276
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Abstract Repeating earthquakes—sequences of colocated, quasi-periodic earthquakes of similar size—are widespread along California's San Andreas fault (SAF) system. Catalogs of repeating earthquakes are vital for studying earthquake source processes, fault properties, and improving seismic hazard models. Here, we introduce an unsupervised machine learning-based method for detecting repeating earthquake sequences (RES) to expand existing RES catalogs or to perform initial, exploratory searches. We implement the "SpecUFEx" algorithm (Holtzman et al., 2018) to reduce earthquake spectrograms into low-dimensional, characteristic fingerprints, and apply hierarchical clustering to group similar fingerprints together independent of location, allowing for a global search for potential RES throughout the data set. We then relocate the potential RES and subject them to the same detection criteria as Waldhauser and Schaff (2021). We apply our method to ∼4000 small (ML 0–3.5) earthquakes located on a 10 km long segment of the creeping SAF and double the number of detected RES, allowing for greater spatial coverage of slip-rate estimates at seismogenic depths. Our method is novel in its ability to detect RES independent of initial locations and is complementary to existing cross-correlation-based methods, leading to more complete RES catalogs and a better understanding of slip rates at depth.
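The grouping step, clustering low‐dimensional fingerprints independent of event location, can be illustrated with plain single‐linkage agglomerative clustering. The 2‐D fingerprints and distance cutoff below are hypothetical stand‐ins for SpecUFEx output, not the published pipeline:

```python
import math

def single_linkage(fingerprints, cutoff):
    """Agglomerative single-linkage clustering: repeatedly merge the two
    closest clusters until the minimum inter-cluster distance exceeds
    `cutoff`. Returns clusters as lists of fingerprint indices."""
    clusters = [[i] for i in range(len(fingerprints))]
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single linkage: distance between closest members
                d = min(math.dist(fingerprints[i], fingerprints[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        if best[0] > cutoff:
            break
        _, a, b = best
        clusters[a].extend(clusters.pop(b))   # b > a, so index a is safe
    return clusters

# hypothetical fingerprints: two tight families plus one outlier
fps = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (20.0, 0.0)]
print(sorted(sorted(c) for c in single_linkage(fps, cutoff=1.0)))
```

Each resulting cluster is then a candidate RES to be relocated and screened against the formal detection criteria.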
- Abstract Seismic phase association is a fundamental task in seismology that pertains to linking together phase detections on different sensors that originate from a common earthquake. It is widely employed to detect earthquakes on permanent and temporary seismic networks and underlies most seismicity catalogs produced around the world. This task can be challenging because the number of sources is unknown, events frequently overlap in time, or can occur simultaneously in different parts of a network. We present PhaseLink, a framework based on recent advances in deep learning for grid‐free earthquake phase association. Our approach learns to link phases together that share a common origin and is trained entirely on millions of synthetic sequences of P and S wave arrival times generated using a 1‐D velocity model. Our approach is simple to implement for any tectonic regime, suitable for real‐time processing, and can naturally incorporate errors in arrival time picks. Rather than tuning a set of ad hoc hyperparameters to improve performance, PhaseLink can be improved by simply adding examples of problematic cases to the training data set. We demonstrate the state‐of‐the‐art performance of PhaseLink on a challenging sequence from southern California and synthesized sequences from Japan designed to test the point at which the method fails. For the examined data sets, PhaseLink can precisely associate phases to events that occur only ∼12 s apart in origin time. This approach is expected to improve the resolution of seismicity catalogs, add stability to real‐time seismic monitoring, and streamline automated processing of large seismic data sets.
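At its simplest, phase association asks whether picks at different stations imply a consistent origin time. A minimal non‐neural sketch, assuming a single hypothetical trial source location, a constant P velocity, and made‐up picks; PhaseLink's grid‐free learned approach is far more general than this back‐projection toy:

```python
def associate(picks, station_dist_km, v=6.0, tol=0.5):
    """Group P-wave picks whose implied origin times agree within `tol`
    seconds. picks: list of (station_id, arrival_time_s);
    station_dist_km: station_id -> distance (km) from the trial source;
    v: assumed constant P velocity in km/s."""
    # implied origin time = arrival time - travel time to that station
    origins = [(t - station_dist_km[sid] / v, sid, t) for sid, t in picks]
    origins.sort()
    groups, current = [], [origins[0]]
    for o in origins[1:]:
        if o[0] - current[-1][0] <= tol:
            current.append(o)          # consistent with the same event
        else:
            groups.append(current)     # gap in origin time: new event
            current = [o]
    groups.append(current)
    return [[sid for _, sid, _ in g] for g in groups]

station_dist_km = {'A': 12.0, 'B': 30.0, 'C': 60.0}
# hypothetical picks: two events about 20 s apart, seen at all stations
picks = [('A', 2.0), ('B', 5.0), ('C', 10.0),
         ('A', 22.1), ('B', 25.2), ('C', 30.1)]
print(associate(picks, station_dist_km))
```

Real networks must also handle unknown source locations, S phases, and false picks, which is where the learned approach earns its keep.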
- Abstract Earthquakes are clustered in space and time, with individual sequences composed of events linked by stress transfer and triggering mechanisms. On a global scale, variations in the productivity of earthquake sequences—a normalized measure of the number of triggered events—have been observed and associated with regional variations in tectonic setting. Here, we focus on resolving systematic variations in the productivity of crustal earthquake sequences in California and Nevada—the two most seismically active states in the western United States. We apply a well-tested nearest-neighbor algorithm to automatically extract earthquake sequence statistics from a unified 40 yr compilation of regional earthquake catalogs that is complete to M ∼ 2.5. We then compare earthquake sequence productivity to geophysical parameters that may influence earthquake processes, including heat flow, temperature at seismogenic depth, complexity of Quaternary faulting, geodetic strain rates, depth to crystalline basement, and faulting style. We observe coherent spatial variations in sequence productivity, with higher values in the Walker Lane of eastern California and Nevada than along the San Andreas fault system in western California. The results illuminate significant correlations between productivity and heat flow, temperature, and faulting that contribute to the understanding and ability to forecast crustal earthquake sequences in the area.
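Productivity comparisons of this kind require removing the first‐order dependence of aftershock count on mainshock magnitude, conventionally N ∼ 10^(α·m). A minimal sketch with hypothetical sequence counts and an illustrative α, not the paper's calibrated values:

```python
import math

def relative_productivity(sequences, alpha=1.0):
    """Each sequence is (mainshock_magnitude, n_triggered). Returns the
    per-sequence delta = log10(N) - alpha * m, a normalized productivity
    that removes the expected scaling N ~ 10**(alpha * m)."""
    return [math.log10(max(n, 1)) - alpha * m for m, n in sequences]

# hypothetical sequences with equal mainshock size but unequal counts,
# mimicking a productive region vs. a less productive one
seqs = [(5.0, 300), (5.0, 30)]
deltas = relative_productivity(seqs)
print(deltas[0] > deltas[1])
```

Mapped across a region, such normalized deltas are what can then be correlated against heat flow, strain rate, and faulting style.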
- The clustering of earthquake magnitudes is poorly understood compared to spatial and temporal clustering. Better understanding of correlations between earthquake magnitudes could provide insight into the mechanisms of earthquake rupture and fault interactions, and improve earthquake forecasting models. In this study, we present a novel method of examining how seismic magnitude clustering occurs beyond the next event in the catalog and evolves with time and space between earthquake events. We first evaluate the clustering signature over time and space using double-difference located catalogs from Southern and Northern California. The strength of magnitude clustering appears to decay linearly with distance between events and logarithmically with time. The signature persists for longer distances (more than 50 km) and times (several days) than previously thought, indicating that magnitude clustering is not driven solely by repeated rupture of an identical fault patch or Omori aftershock processes. The decay patterns occur in all magnitude ranges of the catalog and are demonstrated across multiple methodologies of study. These patterns are also shown to be present in laboratory rock fracture catalogs but absent in ETAS synthetic catalogs. Incorporating magnitude clustering decay patterns into earthquake forecasting models such as ETAS could improve their accuracy.
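A standard way to detect such a clustering signature is to compare a statistic on the observed magnitude sequence against the same statistic on shuffled catalogs, which destroy any ordering while preserving the magnitude distribution. A minimal sketch with a hypothetical catalog, using the mean absolute difference between successive magnitudes as the statistic:

```python
import random
import statistics

def magnitude_clustering_signal(mags, trials=200, seed=0):
    """Return (observed, shuffled_baseline) for the mean absolute
    difference between successive magnitudes. An observed value below
    the shuffled baseline indicates magnitude clustering: successive
    events are more similar in size than chance would allow."""
    def mean_abs_diff(seq):
        return statistics.mean(abs(a - b) for a, b in zip(seq, seq[1:]))
    observed = mean_abs_diff(mags)
    rng = random.Random(seed)
    shuffled = []
    for _ in range(trials):
        s = mags[:]
        rng.shuffle(s)          # destroys ordering, keeps distribution
        shuffled.append(mean_abs_diff(s))
    return observed, statistics.mean(shuffled)

# hypothetical catalog where similar-sized events occur in runs
mags = [2.0, 2.1, 2.0, 3.5, 3.6, 3.4, 2.2, 2.1, 3.5, 3.6]
obs, base = magnitude_clustering_signal(mags)
print(obs < base)
```

Binning the same comparison by inter-event distance or time is what reveals the linear and logarithmic decay patterns described above.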