Title: High-throughput experiments for rare-event rupture of materials
The conditions for rupture of a material commonly vary from sample to sample. Of great importance to applications are the conditions for rare-event rupture, but their measurements require many samples and consume much time. Here, the conditions for rare-event rupture are measured by developing a high-throughput experiment. For each run of the experiment, 1,000 samples are printed under the same nominal conditions and pulled simultaneously to the same stretch. Identifying the rupture of individual samples is automated by processing the video of the experiment. Under monotonic load, the rupture stretch for each sample is recorded. Under cyclic load, the number of cycles to rupture for each sample is also recorded. Rare-event rupture is studied by using the Weibull distribution and the peak-over-threshold method. This work reaffirms that predicting rare events requires large datasets. The high-throughput experiments enable the prediction of rare events with high accuracy and confidence.
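The statistical machinery named in the abstract can be illustrated with a short sketch. The following is a minimal example, not the paper's analysis: it fits a Weibull distribution to synthetic rupture-stretch data and applies the peak-over-threshold method to the weak tail. The data, threshold, and quantile choices are all illustrative assumptions.

```python
# Minimal sketch: fit a Weibull distribution to rupture stretches and apply the
# peak-over-threshold (POT) method to the weak tail. The data are synthetic
# stand-ins for the 1,000 stretches one run of the experiment would yield.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
stretch = 1.0 + 5.0 * rng.weibull(8.0, size=1000)   # hypothetical rupture stretches

# Weibull fit, with the location fixed at the undeformed stretch of 1.
shape, loc, scale = stats.weibull_min.fit(stretch, floc=1.0)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.2f}")

# Peak-over-threshold: fit a generalized Pareto distribution to the amounts by
# which the weakest samples fall below a low threshold.
u = np.quantile(stretch, 0.05)          # threshold choice is illustrative
exceed = u - stretch[stretch < u]
c, gp_loc, gp_scale = stats.genpareto.fit(exceed, floc=0.0)

# Extrapolate: probability that a sample ruptures below a rarer, lower level.
level = u - 0.5
p_tail = (stretch < u).mean() * stats.genpareto.sf(u - level, c, gp_loc, gp_scale)
print(f"P(rupture stretch < {level:.2f}) ~ {p_tail:.2e}")
```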
Award ID(s):
2011754
PAR ID:
10501563
Author(s) / Creator(s):
Publisher / Repository:
Cell Press
Date Published:
Journal Name:
Matter
Volume:
5
Issue:
2
ISSN:
2590-2385
Page Range / eLocation ID:
654 to 665
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Risk assessment of power system failures induced by low-frequency, high-impact rare events is of paramount importance to power system planners and operators. In this paper, we develop a cost-effective multi-surrogate method based on a multifidelity model for assessing risks in probabilistic power-flow analysis under rare events. Specifically, multiple polynomial-chaos-expansion-based surrogate models are constructed to reproduce power system responses to stochastic changes in the load and the random occurrence of component outages. These surrogates then propagate a large number of samples at negligible computational cost and thus efficiently screen out the samples associated with high-risk rare events. The results generated by the surrogates, however, may be biased for samples located in the low-probability tail regions that are critical to power system risk assessment. To resolve this issue, the original high-fidelity power system model is adopted to fine-tune the estimates of the low-fidelity surrogates by reevaluating only a small portion of the samples. This multifidelity approach greatly improves the computational efficiency of the traditional Monte Carlo method in computing rare-event probabilities without sacrificing accuracy.
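As a rough illustration of the screening idea in this abstract, the sketch below stands a cheap polynomial surrogate in for the polynomial-chaos expansion and a toy function in for the power-flow model; the functions, threshold, and sample sizes are placeholders, not the paper's method.

```python
# Rough sketch of multi-surrogate screening: a cheap surrogate filters a large
# Monte Carlo sample for potentially risky cases, and only those are re-run
# through the expensive high-fidelity model. Both "models" are toy stand-ins.
import numpy as np

rng = np.random.default_rng(1)

def high_fidelity(x):
    # Placeholder for the expensive model (e.g., a full power-flow solve).
    return np.tanh(x) + 0.1 * x**3

# Low-order polynomial surrogate built from a handful of expensive runs,
# standing in for a polynomial-chaos expansion.
x_train = np.linspace(-4.0, 4.0, 20)
coeffs = np.polynomial.polynomial.polyfit(x_train, high_fidelity(x_train), deg=5)
def surrogate(x):
    return np.polynomial.polynomial.polyval(x, coeffs)

# Propagate many samples through the cheap surrogate and screen the risky tail.
x_mc = rng.normal(0.0, 1.0, size=200_000)
risky = surrogate(x_mc) > 1.5          # illustrative risk threshold

# Fine-tune: re-evaluate only the screened samples with the high-fidelity model.
# (A fuller treatment would also re-check samples near the threshold, since the
# surrogate can miss borderline cases.)
confirmed = high_fidelity(x_mc[risky]) > 1.5
p_risk = np.count_nonzero(confirmed) / x_mc.size
print(f"estimated rare-event probability: {p_risk:.2e}")
```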
  2. In a landmark 1981 paper, Valiant and Brebner gave birth to the study of oblivious routing and, simultaneously, introduced its most powerful and ubiquitous method: Valiant load balancing (VLB). By routing messages through a randomly sampled intermediate node, VLB lengthens routing paths by a factor of two but gains the crucial property of obliviousness: it balances load in a completely decentralized manner, with no global knowledge of the communication pattern. Forty years later, with datacenters handling workloads whose communication pattern varies too rapidly to allow centralized coordination, oblivious routing is as relevant as ever, and VLB continues to take center stage as a widely used — and in some settings, provably optimal — way to balance load in the network obliviously to the traffic demands. However, the ability of the network to rapidly reconfigure its interconnection topology gives rise to new possibilities. In this work we revisit the question of whether VLB remains optimal in the novel setting of reconfigurable networks. Prior work showed that VLB achieves the optimal tradeoff between latency and guaranteed throughput. In this work we show that a strictly superior latency-throughput tradeoff is achievable when the throughput bound is relaxed to hold with high probability. The same improved tradeoff is also achievable with guaranteed throughput under time-stationary demands, provided the latency bound is relaxed to hold with high probability and that the network is allowed to be semi-oblivious, using an oblivious (randomized) connection schedule but demand-aware routing. We prove that the latter result is not achievable by any fully-oblivious reconfigurable network design, marking a rare case in which semi-oblivious routing has a provable asymptotic advantage over oblivious routing. Our results are enabled by a novel oblivious routing scheme that improves VLB by stretching routing paths the minimum possible amount — an additive stretch of 1 rather than a multiplicative stretch of 2 — yet still manages to balance load with high probability when either the traffic demand matrix or the network’s interconnection schedule are shuffled by a uniformly random permutation. To analyze our routing scheme we prove an exponential tail bound which may be of independent interest, concerning the distribution of values of a bilinear form on an orbit of a permutation group action. 
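The core of VLB is compact enough to sketch. The toy below assumes a hypercube topology (chosen for simplicity, not taken from the paper): each message is routed through a uniformly random intermediate node, independent of the traffic pattern, and the average path length roughly doubles, as the abstract describes.

```python
# Toy illustration of Valiant load balancing on a hypercube: each message goes
# via a uniformly random intermediate node, independent of the traffic pattern.
import random

def hops(u: int, v: int) -> int:
    """Hop count between hypercube nodes: the Hamming distance of their labels."""
    return bin(u ^ v).count("1")

def vlb_hops(src: int, dst: int, n_nodes: int) -> int:
    """Two-phase VLB: src -> random intermediate -> dst."""
    mid = random.randrange(n_nodes)
    return hops(src, mid) + hops(mid, dst)

random.seed(0)
n = 1 << 10    # 1024-node hypercube
pairs = [(random.randrange(n), random.randrange(n)) for _ in range(10_000)]
direct = sum(hops(s, d) for s, d in pairs) / len(pairs)
vlb = sum(vlb_hops(s, d, n) for s, d in pairs) / len(pairs)
print(f"mean direct hops: {direct:.2f}, mean VLB hops: {vlb:.2f}")   # roughly 2x
```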
  3. Data analyses in particle physics rely on an accurate simulation of particle collisions and a detailed simulation of detector effects to extract physics knowledge from the recorded data. Event generators together with a Geant-based simulation of the detectors are used to produce large samples of simulated events for analysis by the LHC experiments. These simulations come at a high computational cost, with the detector simulation and reconstruction algorithms having the largest CPU demands. This article describes how machine-learning (ML) techniques are used to reweight simulated samples obtained with a given set of parameters to samples with different parameters or samples obtained from entirely different simulation programs. The ML reweighting method avoids the need to simulate the detector response multiple times by incorporating the relevant information in a single sample through event weights. Results are presented for reweighting to model variations and higher-order calculations in simulated top quark pair production at the LHC. This ML-based reweighting is an important element of the future computing model of the CMS experiment and will facilitate precision measurements at the High-Luminosity LHC.
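A common way to realize such reweighting is the classifier-based likelihood-ratio trick. The sketch below uses that generic approach on toy one-dimensional data; it should not be read as the CMS implementation, and the distributions and model are assumptions for illustration.

```python
# Generic sketch of classifier-based reweighting (the likelihood-ratio trick):
# a classifier trained to separate the nominal sample from the target sample
# yields per-event weights w = f / (1 - f) that morph nominal into target.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
nominal = rng.normal(0.0, 1.0, size=(20_000, 1))    # toy "parameter set A" events
target = rng.normal(0.3, 1.1, size=(20_000, 1))     # toy "parameter set B" events

X = np.vstack([nominal, target])
y = np.concatenate([np.zeros(len(nominal)), np.ones(len(target))])
clf = GradientBoostingClassifier(max_depth=3).fit(X, y)

f = np.clip(clf.predict_proba(nominal)[:, 1], 1e-6, 1 - 1e-6)
weights = f / (1.0 - f)                              # ~ p_target(x) / p_nominal(x)

# The weighted nominal sample should reproduce the target's distribution.
print(f"nominal mean {nominal.mean():+.3f}, "
      f"reweighted {np.average(nominal[:, 0], weights=weights):+.3f}, "
      f"target {target.mean():+.3f}")
```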
  4. Deep learning-based object detection algorithms enable the simultaneous classification and localization of any number of objects in image data. Many of these algorithms can operate in real time on high-resolution images, contributing to their widespread use across many fields. We present an end-to-end object detection pipeline designed for rare event searches for the Migdal effect, operating at real-time speeds on high-resolution image data from the scientific CMOS camera readout of the MIGDAL experiment. The Migdal effect in nuclear scattering, critical for sub-GeV dark matter searches, has yet to be experimentally confirmed, making its detection a primary goal of the MIGDAL experiment. The Migdal effect forms a composite rare event signal topology consisting of an electronic and a nuclear recoil sharing the same vertex. Crucially, both recoil species are commonly observed in isolation in the MIGDAL experiment, enabling us to train YOLOv8, a state-of-the-art object detection algorithm, on real data. Topologies indicative of the Migdal effect can then be identified in science data via pairs of neighboring or overlapping electron and nuclear recoils. Applying selections to real data that retain 99.7% signal acceptance in simulations, we demonstrate that our pipeline reduces a sample of 20 million recorded images to fewer than 1,000 frames, thereby transforming a rare event search into a much more manageable one. More broadly, we discuss the applicability of object detection for enabling data-driven machine learning training in other rare event search applications, such as neutrinoless double beta decay searches and experiments imaging exotic nuclear decays. Published by the American Physical Society, 2025.
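The pairing step described here lends itself to a small sketch. The detection format, class names, and proximity margin below are assumptions for illustration, not the MIGDAL pipeline's actual output.

```python
# Sketch of the pairing step only: given per-frame detections (class label plus
# bounding box), flag frames where an electron recoil (ER) and a nuclear recoil
# (NR) overlap or neighbor each other. Format and margin are assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    cls: str        # "ER" or "NR"
    x1: float
    y1: float
    x2: float
    y2: float

def boxes_near(a: Detection, b: Detection, margin: float = 10.0) -> bool:
    """True if the boxes overlap or lie within `margin` pixels of each other."""
    return not (a.x2 + margin < b.x1 or b.x2 + margin < a.x1 or
                a.y2 + margin < b.y1 or b.y2 + margin < a.y1)

def migdal_candidates(detections):
    """Return (ER, NR) pairs sharing a neighborhood in a single frame."""
    ers = [d for d in detections if d.cls == "ER"]
    nrs = [d for d in detections if d.cls == "NR"]
    return [(e, n) for e in ers for n in nrs if boxes_near(e, n)]

frame = [Detection("NR", 100, 100, 140, 160), Detection("ER", 145, 150, 260, 170)]
print(len(migdal_candidates(frame)))   # 1 candidate pair in this toy frame
```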
  5. Slopes, foundations, and embankments built on fine-grained soils often exhibit instability or failure under seismic loading. To understand the behavior of clay soils under multiple earthquake loads, kaolinite samples were prepared and tested in the laboratory using a cyclic simple shear device. Each sample was subjected to two cyclic events separated by reconsolidation periods of different durations to simulate different levels of excess pore water pressure dissipation. The results indicated that the degree to which the excess pore water pressure generated during the first cyclic event had dissipated affected the cyclic resistance of the soil during the second cyclic event. The post-cyclic undrained shear strength was also found to be a function of the degree to which excess pore water pressure from the first cyclic load was allowed to dissipate prior to the application of the second cyclic load.