
Title: ARRU Phase Picker: Attention Recurrent-Residual U-Net for Picking Seismic P- and S-Phase Arrivals
Abstract Seismograms are the convolution of seismic sources with the media that seismic waves propagate through, and are therefore the primary observations for studying seismic source parameters and the Earth's interior. Routine earthquake location and travel-time tomography rely on accurate seismic phase picks (e.g., P and S arrivals). As data volumes increase, reliable automated seismic phase-picking methods are needed to analyze data and provide timely earthquake information. However, most traditional autopickers suffer from low signal-to-noise ratios and usually require additional effort to tune hyperparameters for each case. In this study, we proposed a deep-learning approach that adapted soft attention gates (AGs) and recurrent-residual convolution units (RRCUs) into the backbone U-Net for seismic phase picking. The attention mechanism was implemented to suppress responses from waveforms irrelevant to seismic phases, and the cooperating RRCUs further enhanced temporal connections of seismograms at multiple scales. We used numerous earthquake recordings in Taiwan, with diverse focal mechanisms and wide depth and magnitude distributions, to train and test our model. Setting the picking error within 0.1 s and the predicted probability over 0.5, the AG with recurrent-residual convolution unit (ARRU) phase picker achieved an F1 score of 98.62% for P arrivals and 95.16% for S arrivals, with picking rates of 96.72% for P waves and 90.07% for S waves. The ARRU phase picker also showed great generalization capability when handling unseen data: when the model trained with Taiwan data was applied to southern California data, the ARRU phase picker showed no performance degradation. Compared with manual picks, the arrival times determined by the ARRU phase picker showed higher consistency, as evaluated with a set of repeating earthquakes. Arrival picks with less human error could benefit studies such as earthquake location and seismic tomography.
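The evaluation criterion stated in the abstract (a pick counts as correct when its predicted probability exceeds 0.5 and it falls within 0.1 s of the manual pick) can be sketched as a small scoring routine. This is an illustrative reconstruction, not the authors' code; the function name and the greedy nearest-pick matching strategy are assumptions.

```python
def score_picks(pred, manual, prob_min=0.5, tol=0.1):
    """Score automated picks against manual picks.

    pred: list of (time_s, probability) automated picks.
    manual: list of manual arrival times in seconds.
    Returns (precision, recall, f1).
    """
    # keep only picks whose predicted probability clears the threshold
    confident = [t for t, p in pred if p >= prob_min]
    matched = set()
    tp = 0
    for t in confident:
        # greedily match each confident pick to the nearest unmatched manual pick
        best = min(
            (i for i in range(len(manual)) if i not in matched),
            key=lambda i: abs(manual[i] - t),
            default=None,
        )
        if best is not None and abs(manual[best] - t) <= tol:
            matched.add(best)
            tp += 1
    fp = len(confident) - tp          # confident picks with no manual match
    fn = len(manual) - tp             # manual picks the model missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, a pick at 10.05 s with probability 0.9 matches a manual pick at 10.00 s, while a pick with probability 0.3 is discarded before matching even if its time is exact.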
Authors:
Award ID(s):
1725729
Publication Date:
NSF-PAR ID:
10297784
Journal Name:
Seismological Research Letters
Volume:
92
Issue:
4
Page Range or eLocation-ID:
2410 to 2428
ISSN:
0895-0695
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Accurate and (near) real-time earthquake monitoring provides the spatial and temporal behaviors of earthquakes for understanding the nature of earthquakes, and also helps in regional seismic hazard assessments and mitigations. Because of the increase in both the quality and quantity of seismic data, an automated earthquake monitoring system is needed. Most of the traditional methods for detecting earthquake signals and picking phases are based on analyses of features in recordings of an individual earthquake and/or their differences from background noises. When seismicity is high, the seismograms are complicated, and, therefore, traditional analysis methods often fail. With the development of machine learning algorithms, earthquake signal detection and seismic phase picking can be more accurate using the features obtained from a large amount of earthquake recordings. We have developed an attention recurrent residual U-Net algorithm, and used data augmentation techniques to improve the accuracy of earthquake detection and seismic phase picking on complex seismograms that record multiple earthquakes. The use of probability functions of P and S arrivals and potential P and S arrival pairs of earthquakes can increase the computational efficiency and accuracy of backprojection for earthquake monitoring in large areas. We applied our workflow to monitor the earthquake activity in southern California during the 2019 Ridgecrest sequence. The distribution of earthquakes determined by our method is consistent with that in the Southern California Earthquake Data Center (SCEDC) catalog. In addition, the number of earthquakes in our catalog is more than three times that of the SCEDC catalog. Our method identifies additional earthquakes that are close in origin times and/or locations, and are not included in the SCEDC catalog. Our algorithm avoids misidentification of seismic phases for earthquake location.
In general, our algorithm can provide reliable earthquake monitoring over a large area, even during periods of high seismicity.
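The pick-based backprojection idea described above, shifting each station's P picks by predicted travel times so that picks from the same event vote for a common origin time at the true grid point, can be sketched as follows. This is a hedged illustration with invented names and a simple binning scheme, not the authors' workflow.

```python
from collections import Counter

def backproject(picks, travel_time, grid, dt=1.0):
    """Locate an event by counting coincident origin-time votes.

    picks: {station: [P pick times in seconds]}.
    travel_time(station, gp): predicted P travel time from grid point gp.
    grid: iterable of candidate source grid points.
    Returns (best grid point, origin time, vote count).
    """
    best = (None, None, 0)
    for gp in grid:
        votes = Counter()
        for sta, times in picks.items():
            tt = travel_time(sta, gp)
            for t in times:
                # back-shift the pick to a candidate origin-time bin
                votes[round((t - tt) / dt)] += 1
        if votes:
            bin_, count = votes.most_common(1)[0]
            if count > best[2]:
                best = (gp, bin_ * dt, count)
    return best
```

At the true source grid point the back-shifted picks from different stations collapse into the same origin-time bin, so the vote count peaks there; elsewhere the shifted picks scatter across bins.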
  2. SUMMARY A fleet of autonomously drifting profiling floats equipped with hydrophones, known by their acronym mermaid, monitors worldwide seismic activity from inside the oceans. The instruments are programmed to detect and transmit acoustic pressure conversions from teleseismic P wave arrivals for use in mantle tomography. Reporting seismograms in near-real time, within hours or days after they were recorded, the instruments are not usually recovered, but if and when they are, their memory buffers can be read out. We present a unique 1-yr-long data set of sound recorded at frequencies between 0.1 and 20 Hz in the South Pacific around French Polynesia by a mermaid float that was, in fact, recovered. Using time-domain, frequency-domain and time-frequency-domain techniques to comb through the time-series, we identified signals from 213 global earthquakes known to published catalogues, with magnitudes 4.6–8.0, and at epicentral distances between 24° and 168°. The observed signals contain seismoacoustic conversions of compressional and shear waves travelling through crust, mantle and core, including P, S, Pdif, Sdif, PKIKP, SKIKS, surface waves and hydroacoustic T phases. Only 10 earthquake records had been automatically reported by the instrument—the others were deemed low-priority by the onboard processing algorithm. After removing all seismic signals from the record, and also those from other transient, dominantly non-seismic, sources, we are left with the infrasonic ambient noise field recorded at 1500 m depth. We relate the temporally varying noise spectral density to a time-resolved ocean-wave model, WAVEWATCH III. The noise record is extremely well explained, both in spectral shape and in temporal variability, by the interaction of oceanic surface gravity waves. These produce secondary microseisms at acoustic frequencies between 0.1 and 1 Hz according to the well-known frequency-doubling mechanism.
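The frequency-doubling mechanism mentioned at the end of this summary can be demonstrated numerically: the second-order pressure generated by opposing ocean wave trains at frequency f varies as the product of the trains, which oscillates at 2f. The toy script below (illustrative parameter values, not from the paper) confirms this with an FFT.

```python
import numpy as np

fs = 100.0                       # sampling rate in Hz (illustrative)
t = np.arange(0, 100, 1 / fs)    # 100 s record
f_wave = 0.07                    # typical ocean swell frequency, Hz

# two opposing wave trains; their product models the second-order pressure term
pressure = np.cos(2 * np.pi * f_wave * t) * np.cos(2 * np.pi * f_wave * t)

# remove the DC offset, then locate the dominant spectral peak
spec = np.abs(np.fft.rfft(pressure - pressure.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
f_peak = freqs[spec.argmax()]
# the dominant energy sits at 2 * f_wave = 0.14 Hz, i.e. double the swell frequency
```

Since cos²(ωt) = 1/2 + (1/2)cos(2ωt), the oscillatory part of the pressure lives entirely at twice the ocean-wave frequency, which is why secondary microseisms appear between 0.1 and 1 Hz for swell between 0.05 and 0.5 Hz.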
  3. SUMMARY

    Precisely constraining the source parameters of large earthquakes is one of the primary objectives of seismology. However, the quality of the results relies on the quality of synthetic earth response. Although earth structure is laterally heterogeneous, particularly at shallow depth, most earthquake source studies at the global scale rely on the Green's functions calculated with radially symmetric (1-D) earth structure. To avoid the impact of inaccurate Green's functions, these conventional source studies use a limited set of seismic phases, such as long-period seismic waves, broad-band P and S waves in teleseismic distances (30° < ∆ < 90°), and strong ground motion records at close-fault stations. The enriched information embedded in the broad-band seismograms recorded by global and regional networks is largely ignored, limiting the spatiotemporal resolution. Here we calculate 3-D strain Green's functions at 30 GSN stations for source regions of 9 selected global earthquakes and one earthquake-prone area (California), with frequency up to 67 mHz (15 s), using SPECFEM3D_GLOBE and the reciprocity theorem. The 3-D SEM mesh model is composed of mantle model S40RTS, crustal model CRUST2.0 and surface topography ETOPO2. We surround each target event with grids in horizontal spacing of 5 km and vertical spacing of 2.0–3.0 km, allowing us to investigate not only the main shock but also the background seismicity. In total, the response at over 210 000 source points is calculated in simulation. The number of earthquakes, including different focal mechanisms, centroid depth range and tectonic background, could further increase without additional computational cost if they were properly selected to avoid overloading individual CPUs. The storage requirement can be reduced by two orders of magnitude if the output strain Green's functions are stored for periods over 15 s.
We quantitatively evaluate the quality of these 3-D synthetic seismograms, which are frequency and phase dependent, for each source region using nearby aftershocks, before using them to constrain the focal mechanisms and slip distribution. Case studies show that using a 3-D earth model significantly improves the waveform similarity and the agreement in amplitude and arrival time of seismic phases with the observations. The limitations of current 3-D models are still notable and depend on the seismic phases and frequency range. The 3-D synthetic seismograms cannot yet match the high-frequency (>40 mHz) S waves and (>20 mHz) Rayleigh waves well. Though the mean time-shifts are close to zero, the standard deviations are notable. Careful calibration using the records of nearby, better-located earthquakes is still recommended to take full advantage of the better waveform similarity afforded by 3-D models. Our results indicate that it is now feasible to systematically study global large earthquakes using the full 3-D earth response on a global scale.

  4. SUMMARY

    Accurate synthetic seismic wavefields can now be computed in 3-D earth models using the spectral element method (SEM), which helps improve resolution in full waveform global tomography. However, computational costs are still a challenge. These costs can be reduced by implementing a source stacking method, in which multiple earthquake sources are simultaneously triggered in only one teleseismic SEM simulation. One drawback of this approach is the perceived loss of resolution at depth, in particular because high-amplitude fundamental mode surface waves dominate the summed waveforms, without the possibility of windowing and weighting as in conventional waveform tomography.

    This can be addressed by redefining the cost-function and computing the cross-correlation wavefield between pairs of stations before each inversion iteration. While the Green’s function between the two stations is not reconstructed as well as in the case of ambient noise tomography, where sources are distributed more uniformly around the globe, this is not a drawback, since the same processing is applied to the 3-D synthetics and to the data, and the source parameters are known to a good approximation. By doing so, we can separate time windows with large energy arrivals corresponding to fundamental mode surface waves. This opens the possibility of designing a weighting scheme to bring out the contribution of overtones and body waves. It also makes it possible to balance the contributions of frequently sampled paths versus rarely sampled ones, as in more conventional tomography.
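The station-pair cross-correlation of source-stacked records described above can be sketched in a few lines. This is a minimal illustration under assumed array shapes, not the authors' implementation: the key point is that the events are stacked first, exactly as if all sources were triggered simultaneously, and the same operation is then applied to synthetics and data alike.

```python
import numpy as np

def stacked_crosscorrelation(traces_a, traces_b):
    """Cross-correlate source-stacked records at a pair of stations.

    traces_a, traces_b: per-event records of shape (n_events, n_samples)
    at stations A and B. The event axis is summed first, mimicking the
    simultaneous triggering of all sources in one SEM simulation.
    Returns the full cross-correlation (lags from -(n-1) to +(n-1) samples).
    """
    ua = traces_a.sum(axis=0)   # source-stacked record at station A
    ub = traces_b.sum(axis=0)   # source-stacked record at station B
    return np.correlate(ua, ub, mode="full")
```

Because the identical stacking and correlation are applied to observed and synthetic wavefields before the cost function is evaluated, the imperfect reconstruction of the inter-station Green's function does not bias the misfit.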

    Here we present the results of proof of concept testing of such an approach for a synthetic 3-component long period waveform data set (periods longer than 60 s), computed for 273 globally distributed events in a simple toy 3-D radially anisotropic upper mantle model which contains shear wave anomalies at different scales. We compare the results of inversion of 10 000 s long stacked time-series, starting from a 1-D model, using source stacked waveforms and station-pair cross-correlations of these stacked waveforms in the definition of the cost function. We compute the gradient and the Hessian using normal mode perturbation theory, which avoids the problem of cross-talk encountered when forming the gradient using an adjoint approach. We perform inversions with and without realistic noise added and show that the model can be recovered equally well using one or the other cost function.

    The proposed approach is computationally very efficient. While application to more realistic synthetic data sets, and eventually to real data, is beyond the scope of this paper, since it requires additional steps to account for issues such as missing data, we illustrate how this methodology can help inform first-order questions such as model resolution in the presence of noise, and trade-offs between different physical parameters (anisotropy, attenuation, crustal structure, etc.) that would be computationally very costly to address adequately using conventional full waveform tomography based on single-event wavefield computations.

  5. ABSTRACT Rapid association of seismic phases and event location are crucial for real‐time seismic monitoring. We propose a new method, named rapid earthquake association and location (REAL), for associating seismic phases and locating seismic events rapidly, simultaneously, and automatically. REAL combines the advantages of both pick‐based and waveform‐based detection and location methods. It associates arrivals of different seismic phases and locates seismic events primarily through counting the number of P and S picks and secondarily from travel‐time residuals. A group of picks are associated with a particular earthquake if there are enough picks within the theoretical travel‐time windows. The location is determined to be at the grid point with the most picks, and if multiple locations have the same maximum number of picks, the grid point among them with the smallest travel‐time residuals. We refine seismic locations using a least‐squares location method (VELEST) and a high‐precision relative location method (hypoDD). REAL can be used for rapid seismic characterization due to its computational efficiency. As an example application, we apply REAL to earthquakes in the 2016 central Apennines, Italy, earthquake sequence occurring during a five‐day period in October 2016, midway in time between the two largest earthquakes. We associate and locate more than three times as many events (3341) as are in Italy's National Institute of Geophysics and Volcanology routine catalog (862). The spatial distribution of these relocated earthquakes shows a similar but more concentrated pattern relative to the cataloged events. Our study demonstrates that it is possible to characterize seismicity automatically and quickly using REAL and seismic picks.
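REAL's core association rule as described above (count the picks that fall inside theoretical travel-time windows at each candidate grid point and origin time; prefer more picks, then smaller residuals) can be sketched as follows. All names, window widths, and defaults here are illustrative assumptions, not REAL's actual interface.

```python
def associate(picks, grid, predicted_tt, origin_times, win=0.5, min_picks=4):
    """Associate picks with a candidate event by grid search.

    picks: list of (station, phase, pick time) tuples.
    predicted_tt(station, phase, gp): theoretical travel time from grid point gp.
    origin_times: candidate origin times to test.
    Returns (grid point, origin time, associated picks) or None.
    """
    best = None
    for gp in grid:
        for t0 in origin_times:
            assoc, residual = [], 0.0
            for sta, pha, t in picks:
                # distance of the pick from its theoretical arrival time
                dt = abs(t - (t0 + predicted_tt(sta, pha, gp)))
                if dt <= win:
                    assoc.append((sta, pha, t))
                    residual += dt
            # primary criterion: pick count; tiebreaker: smaller total residual
            key = (len(assoc), -residual)
            if len(assoc) >= min_picks and (best is None or key > best[0]):
                best = (key, gp, t0, assoc)
    return None if best is None else best[1:]
```

The coarse location found this way would then be refined with VELEST and hypoDD, as the abstract describes.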