Megathrust earthquakes release and transfer stress that has accumulated over hundreds of years, leading to large aftershocks that can be highly destructive. Understanding the spatiotemporal pattern of megathrust aftershocks is therefore key to mitigating seismic hazard. However, conflicting observations show aftershocks concentrated along the rupture surface itself, along its periphery, or well beyond it, and they can persist for a few years to decades. Here we present aftershock data following the four largest megathrust earthquakes since 1960, focusing on the change in seismicity rate following the best-recorded event, the 2011 Tohoku earthquake: it shows an initially high aftershock rate on the rupture surface that quickly shuts down, while a zone up to ten times larger forms a ring of enhanced seismicity around it. We find that the aftershock patterns of Tohoku and the three other megathrust events can be explained by rate-and-state Coulomb stress transfer. We suggest that the shutdown in seismicity in the rupture zone may persist for centuries, leaving seismicity gaps that can be used to identify prehistoric megathrust events. In contrast, the seismicity of the surrounding area decays over 4-6 decades, increasing the seismic hazard after a megathrust earthquake.
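For context, the rate-and-state response of seismicity to a Coulomb stress step is commonly written in the form derived by Dieterich (1994). The sketch below uses that standard expression with conventional symbols; it is not necessarily the exact formulation used in the study.

```latex
% Dieterich (1994) seismicity rate R following a Coulomb stress step
% \Delta\mathrm{CFS}, relative to the background rate r. Assumed standard
% form: constant background stressing rate \dot{\tau}, constitutive
% parameter A, effective normal stress \sigma.
\[
  \frac{R(t)}{r}
    = \frac{1}{\left(e^{-\Delta\mathrm{CFS}/(A\sigma)} - 1\right)
      e^{-t/t_a} + 1},
  \qquad t_a = \frac{A\sigma}{\dot{\tau}}
\]
% A positive stress step raises the rate (the ring of enhanced seismicity);
% a negative step (the stress shadow on the rupture surface) suppresses it.
% Both effects decay over the aftershock duration t_a.
```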
Online Few-Shot Time Series Classification for Aftershock Detection
Seismic monitoring systems sift through seismograms in real time, searching for target events such as underground explosions. In such a system, a burst of aftershocks (minor earthquakes that occur after a major earthquake, over days or even years) can be a source of confounding signals that overloads the human analysts. To alleviate this burden at the onset of a sequence of events (e.g., aftershocks), a human analyst can label the first few events and start an online classifier to filter out subsequent aftershocks. We propose FewSig, an online few-shot classification model for time series data for this use case. The framework of FewSig consists of a selective model, which identifies high-confidence positive events used to update the models, and a general classifier, which labels the remaining events. Our specific technique uses a two-level decision tree selective model based on sliding DTW distance and a general classifier based on distance metric learning with Neighborhood Component Analysis (NCA). The algorithm demonstrates surprising robustness when tested on univariate datasets from the UEA/UCR archive. Furthermore, we show two real-world earthquake events where FewSig reduces the human effort in monitoring applications by filtering out the aftershock events.
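A hedged sketch of the workflow described above, not the authors' code: class and parameter names are illustrative, a single DTW threshold stands in for the paper's two-level decision tree, and fixed-length, comparable event waveforms with at least one analyst-labeled positive are assumed.

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance (univariate)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

class FewSigSketch:
    """Few-shot online classifier: a DTW-based selective model picks
    high-confidence positives to grow the training set; an NCA-transformed
    1-NN classifier labels everything else."""
    def __init__(self, threshold):
        self.threshold = threshold  # DTW acceptance radius (assumed tuning knob)
        self.X, self.y = [], []

    def fit(self, X_few, y_few):
        # The analyst-labeled few-shot set must contain both classes.
        self.X, self.y = list(X_few), list(y_few)
        self._refit()

    def _refit(self):
        X = np.asarray(self.X)
        self.nca = NeighborhoodComponentsAnalysis(random_state=0).fit(X, self.y)
        self.knn = KNeighborsClassifier(n_neighbors=1).fit(
            self.nca.transform(X), self.y)

    def update(self, x):
        """Label one incoming event; self-update on confident positives."""
        pos = [p for p, lab in zip(self.X, self.y) if lab == 1]
        if pos and min(dtw_distance(x, p) for p in pos) < self.threshold:
            self.X.append(x); self.y.append(1)   # high-confidence positive
            self._refit()
            return 1
        return int(self.knn.predict(self.nca.transform([x]))[0])
```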
- Award ID(s): 2104537
- PAR ID: 10526417
- Publisher / Repository: ACM
- Date Published:
- ISBN: 9798400701030
- Page Range / eLocation ID: 5707 to 5716
- Format(s): Medium: X
- Location: Long Beach, CA, USA
- Sponsoring Org: National Science Foundation
More Like this
SUMMARY Earthquakes come in clusters formed of mostly aftershock sequences, swarms and occasional foreshock sequences. This clustering is thought to result either from stress transfer among faults, a process referred to as cascading, or from transient loading by aseismic slip (pre-slip, afterslip or slow slip events). The ETAS statistical model is often used to quantify the fraction of clustering due to stress transfer and to assess the eventual need for aseismic slip to explain foreshocks or swarms. Another popular model of clustering relies on the earthquake nucleation model derived from experimental rate-and-state friction. According to this model, earthquakes cluster because they are time-advanced by the stress change imparted by the mainshock. This model ignores stress interactions among aftershocks and cannot explain foreshocks or swarms in the absence of transient loading. Here, we analyse foreshock, swarm and aftershock sequences resulting from cascades in a Discrete Fault Network model governed by rate-and-state friction. We show that the model produces realistic swarms, foreshocks and aftershocks. The Omori law, characterizing the temporal decay of aftershocks, emerges in all simulations independently of the assumed initial condition. In our simulations, the Omori law results from the earthquake nucleation process due to rate-and-state friction and from the heterogeneous stress changes due to the coseismic stress transfers. By contrast, the inverse Omori law, which characterizes the accelerating rate of foreshocks, emerges only in the simulations with a dense enough fault system. A high-density complex fault zone favours fault interactions and the emergence of an accelerating sequence of foreshocks. Seismicity catalogues generated with our discrete fault network model can generally be fitted with the ETAS model but with some material differences. In the discrete fault network simulations, fault interactions are weaker in aftershock sequences because they occur in a broader zone of lower fault density and because of the depletion of critically stressed faults. The productivity of the cascading process is, therefore, significantly higher in foreshocks than in aftershocks if fault zone complexity is high. This effect is not captured by the ETAS model of fault interactions. It follows that a foreshock acceleration stronger than expected from ETAS statistics does not necessarily require aseismic slip preceding the mainshock (pre-slip). It can be a manifestation of a cascading process enhanced by the topological properties of the fault network. Similarly, earthquake swarms might not always imply transient loading by aseismic slip, as they can emerge from stress interactions.
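For reference, the two empirical laws discussed above are conventionally written as follows (standard forms with generic constants, not values specific to this study):

```latex
% Modified Omori law: aftershock rate a time t after the mainshock,
% with productivity K, time offset c, and decay exponent p.
\[
  n(t) = \frac{K}{(c + t)^{p}}, \qquad p \approx 1
\]
% Inverse Omori law: average foreshock rate accelerating toward the
% mainshock origin time t_m.
\[
  n(t) \propto \frac{1}{(t_m - t)^{\,p'}}
\]
```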
Abstract Our goal is to build an aftershock catalog with a low magnitude of completeness for the 2020 Mw 6.5 Stanley, Idaho, earthquake. This is challenging because of the low signal-to-noise ratios of the recorded seismograms. We therefore apply convolutional neural networks (CNNs), using 2D time–frequency feature maps as inputs for aftershock detection. Another trained CNN automatically picks P-wave arrival times, which are then used in both nonlinear and double-difference earthquake location algorithms. Our new one-month-long catalog has 4644 events and a completeness magnitude (Mc) of 1.9: over seven times more events, and an Mc 0.9 lower, than the current U.S. Geological Survey National Earthquake Information Center catalog. The distribution and expansion of these aftershocks improve the resolution of two north-northwest-trending faults with different dip angles, providing further support for a central stepover region that changed the earthquake rupture trajectory and induced sustained seismicity.
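A minimal sketch of this kind of pipeline, with assumed window lengths and layer sizes rather than the paper's actual architecture: a 1-D seismogram is turned into a 2-D time-frequency map, which a small CNN then scores for the presence of an event.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

def time_frequency_map(trace, fs=100.0):
    """Log-power spectrogram as the 2-D CNN input feature map."""
    f, t, Sxx = spectrogram(trace, fs=fs, nperseg=64, noverlap=48)
    return np.log10(Sxx + 1e-10).astype(np.float32)

class DetectorCNN(nn.Module):
    """Minimal two-block CNN emitting P(event) for one feature map."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x):            # x: (batch, 1, n_freq, n_time)
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.head(z)).squeeze(1)

# Usage with a synthetic trace (a real pipeline would use picked windows):
trace = np.random.randn(3000)                  # 30 s at an assumed 100 Hz
fmap = time_frequency_map(trace)
x = torch.from_numpy(fmap)[None, None]         # (1, 1, n_freq, n_time)
prob = DetectorCNN()(x)                        # untrained; shape check only
```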
Abstract The development of new earthquake forecasting models is often motivated by one of the following complementary goals: to gain new insights into the governing physics and to produce improved forecasts quantified by objective metrics. Often, one comes at the cost of the other. Here, we propose a question-driven ensemble (QDE) modeling approach to address both goals. We first describe flexible epidemic-type aftershock sequence (ETAS) models in which we relax the assumptions of parametrically defined aftershock productivity and background earthquake rates during model calibration. Instead, both productivity and background rates are calibrated with data such that their variability is optimally represented by the model. Then we consider 64 QDE models in pseudoprospective forecasting experiments for southern California and Italy. QDE models are constructed by combining model parameters of different ingredient models, in which the rules for how to combine parameters are defined by questions about the future seismicity. The QDE models can be interpreted as models that address different questions with different ingredient models. We find that certain models best address the same issues in both regions, and that QDE models can substantially outperform the standard ETAS and all ingredient models. The best performing QDE model is obtained through the combination of models allowing flexible background seismicity and flexible aftershock productivity, respectively, in which the former parameterizes the spatial distribution of background earthquakes and the partitioning of seismicity into background events and aftershocks, and the latter is used to parameterize the spatiotemporal occurrence of aftershocks.
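As background, a minimal sketch of the standard temporal ETAS conditional intensity, with conventional (assumed) parameter values, is shown below; the flexible variants described above replace the constant background rate and productivity with data-calibrated terms.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags, mu=0.1,
                   K=0.02, alpha=1.0, c=0.01, p=1.1, m0=2.0):
    """Standard temporal ETAS conditional intensity:
    lambda(t) = mu + sum over past events i of
                K * exp(alpha * (m_i - m0)) / (t - t_i + c)**p,
    with background rate mu, productivity K, and Omori parameters c, p."""
    past = event_times < t
    dt = t - event_times[past]
    return mu + np.sum(K * np.exp(alpha * (event_mags[past] - m0))
                       / (dt + c) ** p)

# Example: intensity one day into a sequence with M 4.0 and M 5.5 events
times = np.array([0.0, 0.5])     # days
mags = np.array([4.0, 5.5])
print(etas_intensity(1.0, times, mags))
```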
Abstract Gulia and Wiemer (2019; hereafter, GW2019) proposed a near-real-time monitoring system to discriminate between foreshocks and aftershocks. Our analysis (Dascher-Cousineau et al., 2020; hereinafter, DC2020) tested the sensitivity of the proposed Foreshock Traffic-Light System output to parameter choices left to expert judgment for the 2019 Ridgecrest Mw 7.1 and 2020 Puerto Rico Mw 6.4 earthquake sequences. In the accompanying comment, Gulia and Wiemer (2021) suggest that at least six different methodological deviations lead to different pseudoprospective warning levels, particularly for the Ridgecrest aftershock sequence which they had separately evaluated. Here, we show that for four of the six claimed deviations, we conformed to the criteria outlined in GW2019. Two true deviations from the defined procedure are clarified and justified here. We conclude as we did originally, by emphasizing the influence of expert judgment on the outcome in the analysis.