Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- ABSTRACT The foundation of earthquake monitoring is the ability to rapidly detect, locate, and estimate the size of seismic sources. Earthquake magnitudes are particularly difficult to characterize rapidly because each magnitude type applies only to a specific magnitude range, and location errors propagate into substantial magnitude errors. We developed a method for rapid estimation of single-station earthquake magnitudes using raw three-component P waveforms observed at local to teleseismic distances, independent of prior size or location information. We used the MagNet regression model architecture (Mousavi and Beroza, 2020b), which combines convolutional and recurrent neural networks. We trained our model using ∼2.4 million P-phase arrivals labeled with the authoritative magnitude assigned by the U.S. Geological Survey. We tested input data parameters (e.g., window length) that could affect the performance of our model in near-real-time monitoring applications. At the longest waveform window length of 114 s, our model (Artificial Intelligence Magnitude [AIMag]) is accurate (median estimated magnitude within ±0.5 magnitude units of the catalog magnitude) between M 2.3 and 7.6, although magnitudes above M ∼7 are increasingly underestimated as true magnitude increases. As the window is shortened, down to 1 s, the point at which higher magnitudes begin to be underestimated moves toward lower magnitudes, and the degree of underestimation increases. The overestimation of the smallest magnitudes and underestimation of the largest are potentially related to the limited number of events in these ranges within the training data, as well as to magnitude saturation effects from not capturing the full source time function of large earthquakes. Importantly, AIMag can determine earthquake magnitudes from individual stations' waveforms without instrument response correction or knowledge of an earthquake's source-station distance. This work may enable monitoring agencies to more rapidly recognize large, potentially tsunamigenic global earthquakes from a few stations, allowing for faster event processing and reporting, which is critical for timely warnings of seismic-related hazards.
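A minimal sketch of what a MagNet-style CNN-plus-RNN magnitude regressor can look like in Keras; the layer sizes, the 100 Hz sampling assumption, and the single-output head are illustrative, not the published AIMag configuration:

```python
# Illustrative MagNet-style regressor: raw 3-component P waveform in,
# scalar magnitude out. Hyperparameters are assumptions for the sketch.
from tensorflow.keras import layers, models

def build_magnet_like(n_samples=11400, n_channels=3):
    # 11400 samples = 114 s window at an assumed 100 Hz sampling rate
    inp = layers.Input(shape=(n_samples, n_channels))
    x = layers.Conv1D(64, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling1D(4)(x)
    x = layers.Conv1D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(4)(x)
    x = layers.Bidirectional(layers.LSTM(100))(x)  # temporal aggregation
    out = layers.Dense(1)(x)                       # estimated magnitude
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model
```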
- Abstract Earthquake early warning (EEW) systems aim to forecast shaking intensity rapidly after an earthquake occurs and to send warnings to affected areas before the onset of strong shaking. Such systems rely on rapid and accurate estimation of earthquake source parameters. However, source estimation for large ruptures in real time is challenging and often leads to magnitude underestimation. In a previous study, we showed that machine learning, HR-GNSS, and realistic rupture synthetics can be used to reliably predict earthquake magnitude. This model, called the Machine-Learning Assessed Rapid Geodetic Earthquake model (M-LARGE), can rapidly forecast large earthquake magnitudes with an accuracy of 99%. Here, we expand M-LARGE to predict centroid location and fault size, enabling construction of the fault rupture extent for forecasting shaking intensity with existing ground motion models. We test our model in the Chilean subduction zone with thousands of simulated and five real large earthquakes. The result achieves an average warning time of 40.5 s for shaking intensity MMI 4+, surpassing the 34 s obtained by a similar GNSS EEW model. Our approach addresses a critical gap in existing EEW systems for large earthquakes by demonstrating the feasibility of real-time fault tracking without saturation issues. This capability leads to timely and accurate ground motion forecasts and can support other methods, enhancing the overall effectiveness of EEW systems. Additionally, the ability to predict source parameters for real Chilean earthquakes implies that synthetic data, governed by our understanding of earthquake scaling, are consistent with actual rupture processes.
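As a rough sketch of the multi-output idea (not the actual M-LARGE architecture), a recurrent network can map network-wide HR-GNSS time series to magnitude, centroid, and fault dimensions at once; the station count, window length, and layer sizes below are assumptions:

```python
# Hypothetical multi-output source-parameter regressor for HR-GNSS input.
from tensorflow.keras import layers, models

n_epochs, n_stations, n_components = 102, 121, 3   # assumed dimensions
inp = layers.Input(shape=(n_epochs, n_stations * n_components))
x = layers.LSTM(128, return_sequences=True)(inp)
x = layers.LSTM(128)(x)
magnitude = layers.Dense(1, name="magnitude")(x)
centroid = layers.Dense(3, name="centroid_lon_lat_depth")(x)
fault = layers.Dense(2, name="fault_length_width")(x)
model = models.Model(inp, [magnitude, centroid, fault])
model.compile(optimizer="adam", loss="mse")        # one MSE per output head
```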
- Abstract Stochastic slip rupture modeling is a computationally efficient, reduced-physics approximation capable of creating large numbers of unique ruptures from only a few statistical assumptions. Yet one fundamental question for this approach is whether the slip distributions calculated this way are "realistic": can stochastic modeling reproduce slip distributions that match what is seen in M9+ events recorded in instrumental time? We focus here on testing the ability of the von Karman ACF method for stochastic slip modeling to reproduce M9+ events. We start with the 2011 M9.1 Tohoku-Oki earthquake and tsunami, for which we test both a stochastic method with a homogeneous background mean model and a method in which slip is informed by an additional interseismic coupling constraint. We test two coupling constraints with varying assumptions of either trench-locking or trench-creeping and assess their influence on the calculated ruptures. We quantify the dissimilarity between the 12,000 modeled ruptures and a slip inversion for the Tohoku earthquake. We also model tsunami inundation for over 300 ruptures and compare the results to an inundation survey along the eastern coastline of Japan. We conclude that stochastic slip modeling produces ruptures that can be considered "Tohoku-like," and that inclusion of coupling can both positively and negatively influence the ability to create realistic ruptures. We then expand our study to show that for the 1960 M9.4–9.6 Chile, 1964 M9.2 Alaska, and 2004 M9.1–9.3 Sumatra events, stochastic slip modeling can produce ruptures that compare favorably to those events.
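For intuition, here is a minimal sketch of the von Karman ACF approach on a one-dimensional string of fault patches: build the correlation matrix, factor it, and draw correlated perturbations around a homogeneous mean slip model. The correlation length, Hurst exponent, and lognormal perturbation are assumptions of the sketch:

```python
# Minimal von Karman stochastic slip sketch (1-D fault, illustrative values).
import numpy as np
from scipy.special import kv, gamma

def von_karman(r, a=40.0, H=0.75):
    """von Karman ACF; a = correlation length (km), H = Hurst exponent."""
    x = np.maximum(r / a, 1e-9)                 # avoid the r = 0 singularity
    return (x ** H) * kv(H, x) / (2 ** (H - 1) * gamma(H))  # C(0) = 1

patch_x = np.linspace(0.0, 400.0, 80)           # patch centers along strike, km
R = np.abs(patch_x[:, None] - patch_x[None, :]) # pairwise patch distances
C = von_karman(R)
L = np.linalg.cholesky(C + 1e-8 * np.eye(len(patch_x)))  # jitter for stability
mean_slip = 10.0                                # m, homogeneous background mean
z = np.random.default_rng().standard_normal(len(patch_x))
slip = mean_slip * np.exp(0.5 * (L @ z))        # lognormal, strictly positive
```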
- Abstract At subduction zones, the down-dip limit of slip represents how deep an earthquake can rupture. This limit is important for hazards: it controls the intensity of shaking and the pattern of coseismic uplift and subsidence. In the Cascadia subduction zone, because no large-magnitude events have been observed in instrumental times, the limit is inferred from geological estimates of coastal subsidence during previous earthquakes and is typically assumed to coincide approximately with the coastline. This is at odds with geodetic coupling models, as it leaves residual slip deficits unaccommodated on a large swath of the megathrust. Here we will show that ruptures can penetrate deeper into the megathrust and still produce coastal subsidence, provided slip decreases with depth. We will discuss the impacts of this on expected shaking intensities.
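A toy numerical illustration of the mechanism (all values invented for the sketch): a rupture extending deeper than a coastline-truncated one can carry the same moment if slip decays with depth:

```python
# Toy comparison: coastline-truncated slip vs. a deeper, depth-tapered
# profile scaled to the same moment. Numbers are illustrative only.
import numpy as np

depth = np.linspace(5.0, 40.0, 100)                     # km down-dip
truncated = np.where(depth <= 20.0, 15.0, 0.0)          # slip stops near coast
tapered = 15.0 * np.exp(-np.maximum(depth - 5.0, 0.0) / 10.0)
tapered *= np.trapz(truncated, depth) / np.trapz(tapered, depth)  # match moment
```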
- Earthquake early warning systems use synthetic data from simulation frameworks like MudPy to train models for predicting the magnitudes of large earthquakes. MudPy, although powerful, has limitations: lengthy simulation times to generate the required data, a lack of user-friendliness, and no platform for discovering and sharing its data. We introduce the FakeQuakes DAGMan Workflow (FDW), which uses the Open Science Grid (OSG) for parallel computation to accelerate and streamline MudPy simulations. FDW significantly reduces runtime and increases throughput compared to a single-machine setup. Using FDW, we also explore partitioned parallel HTCondor DAGMan workflows to enhance OSG efficiency. Additionally, we investigate leveraging cyberinfrastructure, such as the Virtual Data Collaboratory (VDC), to enhance MudPy and OSG. Specifically, we simulate cloud-bursting policies that offload FDW jobs to VDC during OSG peak demand, addressing shared-resource issues and user goals; we also discuss VDC's value as a platform for broad access to MudPy products.
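To make the fan-out concrete, here is a hedged sketch of generating an HTCondor DAGMan file that runs FakeQuakes batches in parallel and joins them; the submit-file names and batch structure are hypothetical placeholders, not FDW's actual layout:

```python
# Hypothetical generator for a fan-out/fan-in FakeQuakes DAG.
n_batches = 10
with open("fakequakes.dag", "w") as dag:
    for i in range(n_batches):
        dag.write(f"JOB batch{i} fakequakes_batch.sub\n")   # placeholder .sub
        dag.write(f'VARS batch{i} batch_id="{i}"\n')
    dag.write("JOB merge merge_ruptures.sub\n")             # placeholder .sub
    parents = " ".join(f"batch{i}" for i in range(n_batches))
    dag.write(f"PARENT {parents} CHILD merge\n")
# Submit with: condor_submit_dag fakequakes.dag
```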
- Data-driven approaches to identifying geophysical signals have proven beneficial in high-dimensional environments where model-driven methods fall short. GNSS offers a source of unsaturated ground motion observations, the data currency of ground motion forecasting and rapid seismic hazard assessment and alerting. However, these GNSS-sourced signals are superposed onto hardware-, location-, and time-dependent noise signatures influenced by the Earth's atmosphere, low-cost or spaceborne oscillators, and complex radio-frequency environments. Eschewing heuristic or physics-based models for a data-driven approach in this context is a step forward in autonomous signal discrimination. However, the performance of a data-driven approach depends on a substantial number of accurately classified, representative samples, and more complex algorithm architectures for deeper scientific insight compound this need. Existing catalogs of high-rate (≥1 Hz) GNSS ground motions are relatively limited. In this work, we model and evaluate the probabilistic noise of GNSS velocity measurements over a hemispheric network. We generate stochastic noise time series to augment low-noise strong-motion signals, transferred from an existing inertial catalog, recorded within 70 km of strong (≥ Mw 5.0) events. We leverage known signal and noise information to assess feature extraction strategies and quantify the benefits of augmentation. We find that a classifier trained on this expanded pseudo-synthetic catalog generalizes better than a model trained solely on a real GNSS velocity catalog, and the approach offers a framework for future enhanced data-driven methods.
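A minimal sketch of the augmentation step, assuming a power-law noise spectrum (the study itself models station noise empirically): synthesize colored noise via random phases and an inverse FFT, then superpose it on a low-noise velocity trace. The file name, amplitude, and spectral slope are illustrative:

```python
# Illustrative GNSS-like colored-noise synthesis for data augmentation.
import numpy as np

def synth_noise(n, dt=1.0, amp=0.005, slope=-0.5, rng=None):
    """Draw one noise trace of n samples with an assumed power-law PSD."""
    rng = rng or np.random.default_rng()
    freqs = np.fft.rfftfreq(n, dt)
    psd = np.zeros_like(freqs)
    psd[1:] = freqs[1:] ** slope                       # power-law shape
    phase = np.exp(2j * np.pi * rng.random(len(freqs)))
    noise = np.fft.irfft(np.sqrt(psd) * phase, n)
    return amp * noise / noise.std()                   # scale to target level

velocity = np.load("strong_motion_velocity.npy")       # hypothetical 1 Hz trace
augmented = velocity + synth_noise(len(velocity))
```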
- ABSTRACT We present an approach for generating stochastic scenario rupture models and semistochastic broadband seismic waveforms that include validated P waves, an important feature for testing early warning systems. Few observations of large-magnitude earthquakes are available for the development and refinement of early warning procedures; thus, simulated data are a valuable supplement. We demonstrate the advantage of using the Karhunen–Loève expansion method for generating stochastic scenario rupture models, as it allows the user to build in desired spatial qualities, such as a slip inversion as a mean background slip model. For waveform computation, we employ a deterministic approach at low frequencies (<1 Hz) and a semistochastic approach at high frequencies (>1 Hz). Our approach follows Graves and Pitarka (2010) and extends it to model P waves. We present the first validation of semistochastic broadband P waves, comparing our waveforms against observations of the 2014 Mw 8.1 Iquique, Chile, earthquake in the time domain and across frequencies of interest. We then consider the P waves in greater detail, using a set of synthetic waveforms generated for scenario ruptures in the Cascadia subduction zone. We confirm that the time-dependent growth of synthetic P-wave amplitudes is consistent with previous analyses and demonstrate how the data could be used to simulate earthquake early warning procedures.
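A compact sketch of the Karhunen–Loève expansion step under assumed inputs: eigendecompose a patch covariance, then perturb the mean background slip model along the leading modes. The covariance model, patch geometry, and mode count are illustrative:

```python
# Illustrative Karhunen-Loeve slip sampler:
# slip = mean + sum_k sqrt(lam_k) * z_k * phi_k
import numpy as np

def kl_sample(mean_slip, C, n_modes=30, rng=None):
    """mean_slip: (n,) background model; C: (n, n) slip covariance."""
    rng = rng or np.random.default_rng()
    eigval, eigvec = np.linalg.eigh(C)
    order = np.argsort(eigval)[::-1][:n_modes]          # leading modes
    lam = np.clip(eigval[order], 0.0, None)
    phi = eigvec[:, order]
    z = rng.standard_normal(n_modes)
    return np.maximum(mean_slip + phi @ (np.sqrt(lam) * z), 0.0)

n = 80
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) * 5.0  # km
C = np.exp(-dist / 40.0)                 # stand-in exponential covariance
rupture = kl_sample(np.full(n, 10.0), C) # one scenario slip realization
```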