Abstract We examine the behavior of natural basaltic and trachytic samples during paleointensity experiments on both the original and laboratory-acquired thermal remanences and characterize the samples using proxies for domain state, including the curvature (k) and bulk domain stability parameters of Paterson (2011, https://doi.org/10.1029/2011JB008369) and Paterson et al. (2017, https://doi.org/10.1073/pnas.1714047114), respectively. A curvature value of 0.164, suggested by Paterson (2011, https://doi.org/10.1029/2011JB008369) as a critical threshold separating single-domain-like from multidomain-like remanences, was applied to the original paleointensity data to divide the samples into “straight” (single-domain-like) and “curved” (multidomain-like) groups. Specimens from the two sample sets were given a “fresh” thermal remanent magnetization in a 70 μT field and subjected to an infield-zerofield, zerofield-infield (IZZI)-type (Yu et al., 2004, https://doi.org/10.1029/2003GC000630) paleointensity experiment. The straight sample set recovered the laboratory field with high precision, while the curved set yielded much more scattered results (70.5 ± 1.5 and 71.9 ± 5.2 μT, respectively). The average intensity of both sets was nonetheless quite close to the laboratory field of 70 μT, suggesting that when an experiment contains a sufficient number of specimens, the field estimate is not strongly biased. We found that the dependence of the laboratory thermal remanent magnetization on cooling rate was significant in most samples and did not depend on the domain state inferred from hysteresis-based proxies; the cooling rate effect should therefore be estimated for all samples whose natural cooling rates differ from that used in the laboratory.
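As an illustration of the classification step described in this abstract, the following sketch (hypothetical code, not the authors') groups specimens by the k = 0.164 curvature threshold and summarizes the recovered intensity in each group; all specimen values are placeholders.

```python
# Minimal sketch (not from the paper): split specimens at the k = 0.164
# curvature threshold of Paterson (2011) and summarize recovered intensities.
# All specimen values below are hypothetical placeholders, not published data.
import numpy as np

K_CRIT = 0.164  # suggested single-domain / multidomain threshold

def summarize_by_curvature(k, b_est):
    """Group specimens into 'straight' (k <= K_CRIT) and 'curved' (k > K_CRIT)
    and report mean +/- standard deviation of the recovered intensity (in uT)."""
    k, b_est = np.asarray(k), np.asarray(b_est)
    summary = {}
    for name, mask in (("straight", k <= K_CRIT), ("curved", k > K_CRIT)):
        summary[name] = (b_est[mask].mean(), b_est[mask].std(ddof=1))
    return summary

# Hypothetical specimen curvatures and recovered paleointensities (uT)
k_vals = [0.05, 0.10, 0.20, 0.30, 0.08, 0.25]
b_vals = [70.1, 69.8, 66.0, 77.5, 70.9, 74.2]
print(summarize_by_curvature(k_vals, b_vals))
```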
This content will become publicly available on May 15, 2026.

Enhancing the fidelity of social media image data sets in earthquake damage assessment
The development of artificial intelligence (AI) provides an opportunity for rapid and accurate assessment of earthquake-induced infrastructure damage using social media images. Nevertheless, data collection and labeling remain challenging due to limited expertise among annotators. This study introduces a novel four-class Earthquake Infrastructure Damage (EID) assessment data set, compiled from images drawn from several existing social media image databases but with added emphasis on data quality. Unlike previous data sets such as the Damage Assessment Dataset (DAD) and Crisis Benchmark, EID includes comprehensive labeling guidelines and a multiclass classification system aligned with established damage scales, such as HAZUS and EMS-98, to enhance the accuracy and utility of social media imagery for disaster response. By integrating detailed descriptions and clear labeling criteria, the EID labeling approach reduces the subjectivity of image labeling and the inconsistencies found in existing data sets. The findings demonstrate a significant improvement in annotator agreement, reducing disagreement from 39.7% to 10.4%, thereby validating the efficacy of the refined labeling strategy. The EID, containing 13,513 high-quality images from five significant earthquakes, is designed to support community-level assessments and advanced computational research, paving the way for enhanced disaster response strategies through improved data utilization and analysis. The data set is available at DesignSafe: https://doi.org/10.17603/ds2-yj8p-hs62.
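To make the agreement statistic concrete, the sketch below (an illustrative assumption, not the EID labeling pipeline) computes a pairwise annotator disagreement rate for a four-class damage scheme; the class names and labels are invented for the example.

```python
# Minimal sketch (illustrative only): fraction of annotator-pair votes that
# disagree, given per-image labels from a four-class damage scheme.
from itertools import combinations

CLASSES = ["none", "slight/moderate", "extensive", "complete"]  # hypothetical names

def disagreement_rate(annotations):
    """annotations: dict of annotator -> list of labels (same image order).
    Returns the fraction of (image, annotator-pair) comparisons that disagree."""
    annotators = list(annotations)
    n_images = len(annotations[annotators[0]])
    disagree = total = 0
    for i in range(n_images):
        for a, b in combinations(annotators, 2):
            total += 1
            disagree += annotations[a][i] != annotations[b][i]
    return disagree / total

labels = {
    "annotator_1": ["none", "extensive", "complete", "slight/moderate"],
    "annotator_2": ["none", "complete", "complete", "slight/moderate"],
}
print(f"disagreement: {disagreement_rate(labels):.1%}")
```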
- Award ID(s): 1826118
- PAR ID: 10613715
- Publisher / Repository: Earthquake Spectra
- Date Published:
- Journal Name: Earthquake Spectra
- ISSN: 8755-2930
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Abstract Low-frequency earthquakes are a seismic manifestation of slow fault slip. Their emergent onsets, low amplitudes, and unique frequency characteristics make these events difficult to detect in continuous seismic data. Here, we train a convolutional neural network to detect low-frequency earthquakes near Parkfield, CA, using the catalog of Shelly (2017, https://doi.org/10.1002/2017jb014047) as training data. We explore how varying the model size and targets influences the performance of the resulting network. Our preferred network has a peak accuracy of 85% and can reliably pick low-frequency earthquake (LFE) S-wave arrival times on single-station records. We demonstrate the abilities of the network using data from permanent and temporary stations near Parkfield and show that it detects new LFEs that are not part of the Shelly (2017, https://doi.org/10.1002/2017jb014047) catalog. Overall, machine-learning approaches show great promise for identifying additional low-frequency earthquake sources. The technique is fast, generalizable, and does not require sources to repeat.
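A minimal sketch of the kind of single-station detector this abstract describes is given below; the architecture, window length, and sampling rate are assumptions for illustration and are not the network of the study.

```python
# Minimal sketch (hypothetical architecture): a small 1-D CNN that maps a
# three-component, single-station waveform window to an LFE-detection logit.
import torch
import torch.nn as nn

class LFEDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, 1)  # logit for "LFE present in window"

    def forward(self, x):              # x: (batch, 3, n_samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = LFEDetector()
window = torch.randn(8, 3, 3000)       # e.g., 30 s at 100 Hz (assumed), synthetic data
print(torch.sigmoid(model(window)).shape)  # (8, 1) detection probabilities
```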
- 
Abstract Elevated seismic noise for moderate-size earthquakes recorded at teleseismic distances has limited our ability to see their complexity. We develop a machine-learning-based algorithm to separate noise and earthquake signals that overlap in frequency. The multi-task encoder-decoder model is built around a kernel pre-trained on local (i.e., short-distance) earthquake data (Yin et al., 2022, https://doi.org/10.1093/gji/ggac290) and is modified by continued learning with high-quality teleseismic data. We denoise teleseismic P waves of deep Mw 5.0+ earthquakes and use the clean P waves to estimate source characteristics with reduced uncertainties for these understudied earthquakes. We find a scaling of moment with duration of M0 ∝ τ^4, and a resulting strong scaling of stress drop and radiated energy with magnitude. The median radiation efficiency is 5%, a low value compared to crustal earthquakes. Overall, we show that deep earthquakes have weak rupture directivity and few subevents, suggesting that a simple model of a circular crack with radial rupture propagation is appropriate. When accounting for their respective scaling with earthquake size, we find no systematic depth variation of duration, stress drop, or radiated energy within the 100–700 km depth range. Our study supports the findings of Poli and Prieto (2016, https://doi.org/10.1002/2016jb013521) with twice as many earthquakes investigated, including events of lower magnitudes.
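The moment-duration scaling quoted above can be illustrated with a simple log-log regression; the sketch below uses synthetic values and is not the study's code.

```python
# Minimal sketch: estimate the exponent n in M0 ∝ τ^n by least squares in
# log-log space. The catalog values below are synthetic placeholders.
import numpy as np

def fit_scaling_exponent(moment, duration):
    """Fit log10(M0) = n * log10(tau) + c and return the exponent n."""
    n, _c = np.polyfit(np.log10(duration), np.log10(moment), 1)
    return n

tau = np.array([1.0, 2.0, 4.0, 8.0])                          # durations (s), synthetic
m0 = 1e17 * tau**4 * 10**np.random.normal(0, 0.05, tau.size)  # ~tau^4 plus scatter
print(f"estimated exponent: {fit_scaling_exponent(m0, tau):.2f}")  # close to 4
```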
- 
Abstract The polar F-region ionosphere frequently exhibits sporadic variability (e.g., Meek, 1949, https://doi.org/10.1029/JZ054i004p00339; Hill, 1963, https://doi.org/10.1175/1520-0469(1963)020<0492:SEOLII>2.0.CO;2). Recent satellite data analysis (Noja et al., 2013, https://doi.org/10.1002/rds.20033; Chartier et al., 2018, https://doi.org/10.1002/2017JA024811) showed that the high-latitude F-region ionosphere exhibits sporadic enhancements more frequently in January than in July in both the northern and southern hemispheres. The same pattern has been seen in statistics of the degradation and total loss of GPS service onboard low-Earth-orbit satellites (Xiong et al., 2018, https://doi.org/10.5194/angeo-36-679-2018). Here, we confirm the existence of this annual pattern using ground GPS-based images of TEC from the MIDAS algorithm. Images covering January and July 2014 confirm that the high-latitude (>70° MLAT) F-region exhibits a substantially larger range of values in January than in July in both the northern and southern hemispheres. The range of TEC values observed in the polar caps is 38–57 TECU (north-south) in January versus 25–37 TECU in July. First-principles modeling using SAMI3 reproduces this pattern and indicates that it is caused by an asymmetry in plasma levels (30% higher in January than in July across both polar caps), as well as 17% longer O+ plasma lifetimes in northern hemisphere winter compared to southern hemisphere winter.
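As a rough illustration of the polar-cap statistic quoted above, the sketch below computes the range of TEC poleward of 70° magnetic latitude from a stack of gridded TEC maps; the array layout and grid are assumptions, not the MIDAS workflow.

```python
# Minimal sketch (assumed array layout): range of TEC values poleward of a
# magnetic-latitude boundary across a set of gridded TEC maps.
import numpy as np

def polar_cap_tec_range(tec_maps, mlat, cap_boundary=70.0):
    """tec_maps: (n_times, n_lat, n_lon) in TECU; mlat: (n_lat,) magnetic latitude.
    Returns max - min TEC observed poleward of the cap boundary."""
    cap = np.abs(mlat) >= cap_boundary
    values = tec_maps[:, cap, :]
    return float(values.max() - values.min())

# Synthetic example: ten maps on a 5-degree latitude grid
mlat = np.arange(-90, 91, 5)
maps = np.random.uniform(5, 40, size=(10, mlat.size, 72))
print(polar_cap_tec_range(maps, mlat))
```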
- 
Gaining timely and reliable situation awareness after hazard events such as a hurricane is crucial to emergency managers and first responders. One effective way to achieve that goal is through damage assessment. Recently, disaster researchers have been utilizing imagery captured by satellites or drones to quantify the number of flooded or damaged buildings. In this paper, we propose a mixed-data approach, which leverages publicly available satellite imagery and geolocation features of the affected area to identify damaged buildings after a hurricane. The method demonstrated a significant improvement over performing a similar task using only imagery features, based on a case study of Hurricane Harvey, which affected the Greater Houston area in 2017. This result opens the door to a wide range of possibilities for unifying advances in computer vision algorithms, such as convolutional neural networks, with traditional damage assessment methods, for example those using flood depth or bare-earth topology. In this work, a creative choice of geolocation features was made to provide extra information beyond the imagery features, but it is up to users to decide which other features to include to model the physical behavior of the event, depending on their domain knowledge and the type of disaster. The data set curated in this work is made openly available (DOI: 10.17603/ds2-3cca-f398).
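A minimal sketch of the mixed-data idea follows (hypothetical feature names and layer sizes, not the paper's model): an image embedding from a pretrained CNN backbone is concatenated with geolocation features before a small classifier head.

```python
# Minimal sketch (illustrative only): fuse a CNN image embedding with tabular
# geolocation features (e.g., elevation, distance to coast) for a binary
# damaged/undamaged classifier. Feature dimensions are assumptions.
import torch
import torch.nn as nn

class MixedDataClassifier(nn.Module):
    def __init__(self, img_dim=512, geo_dim=4):
        super().__init__()
        self.geo_net = nn.Sequential(nn.Linear(geo_dim, 16), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(img_dim + 16, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, img_feat, geo_feat):
        # img_feat: (batch, img_dim) embedding from a pretrained CNN backbone
        # geo_feat: (batch, geo_dim) geolocation features for the same building
        fused = torch.cat([img_feat, self.geo_net(geo_feat)], dim=1)
        return self.head(fused)  # logit; apply sigmoid for damage probability

model = MixedDataClassifier()
logit = model(torch.randn(2, 512), torch.randn(2, 4))
print(torch.sigmoid(logit))
```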