-
The longwall mining method is designed to optimize coal extraction through controlled roof caving, which inevitably induces seismicity. This research employs a distributed acoustic sensing (DAS) system with a fire-safe fiber-optic cable strategically installed underground within an operational longwall coal mine. Despite its lower sensitivity compared with traditional seismometers, DAS benefits from dense sensor spacing and close proximity to the active face, where many microseismic events occur. To automatically detect seismic events in the voluminous DAS records, we employ convolutional autoencoder deep learning models for anomaly (potential seismic event) detection in power spectral density (PSD) images of the DAS recordings. Kernel density estimation (KDE) is used to compute the probability density function (PDF) of the density scores in the latent space (the compressed data representation), and this quantity serves as a threshold to distinguish PSDs associated with background noise from those associated with potential seismic events. The DAS monitoring system, in conjunction with the developed deep learning model, could enhance longwall coal mining safety and efficiency by providing valuable data from densely deployed multichannel sensors near the mining operations.
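As a rough illustration of the detection pipeline described in this abstract, the sketch below trains a small convolutional autoencoder on PSD image patches, fits a KDE to the latent vectors of background-noise windows, and flags windows whose latent log-density falls below a percentile threshold. This is a minimal sketch under stated assumptions, not the authors' implementation: the network layout, latent dimension, KDE bandwidth, and threshold percentile are illustrative choices, and the random arrays stand in for real PSDs computed from DAS channels.

```python
# Minimal sketch (not the paper's code): convolutional autoencoder over PSD
# images with a KDE-based threshold on the latent space. Sizes and thresholds
# are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import KernelDensity

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 16 * 16),
            nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 2, stride=2),     # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2),      # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def train_autoencoder(model, psd_images, epochs=10, lr=1e-3):
    """Train on PSD windows assumed to be dominated by background noise."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        recon, _ = model(psd_images)
        loss = loss_fn(recon, psd_images)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def fit_kde_threshold(model, noise_images, percentile=1.0):
    """Fit a KDE to latent vectors of background-noise PSDs and choose a
    log-density threshold below which a window is flagged as a candidate event."""
    with torch.no_grad():
        _, z = model(noise_images)
    kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(z.numpy())
    scores = kde.score_samples(z.numpy())
    return kde, np.percentile(scores, percentile)

def detect_candidates(model, kde, threshold, psd_images):
    """Flag PSD windows whose latent log-density falls below the threshold."""
    with torch.no_grad():
        _, z = model(psd_images)
    return kde.score_samples(z.numpy()) < threshold

# Toy usage with random stand-in data (real input: PSDs of DAS channel windows).
if __name__ == "__main__":
    noise = torch.rand(64, 1, 64, 64)     # background-noise PSD patches
    incoming = torch.rand(8, 1, 64, 64)   # new windows to screen
    model = train_autoencoder(ConvAutoencoder(), noise)
    kde, thr = fit_kde_threshold(model, noise)
    print(detect_candidates(model, kde, thr, incoming))
```

In practice the reconstruction error could serve as an alternative or complementary anomaly score; the latent-density criterion shown here simply mirrors the KDE-on-latent-space idea described in the abstract.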
-
Human–machine partnerships at the exascale: exploring simulation ensembles through image databases
The explosive growth in supercomputer capacity has changed simulation paradigms. Simulations have shifted from a few lengthy runs to ensembles of many simulations with varying initial conditions or input parameters. An ensemble therefore comprises large volumes of multi-dimensional data that can exceed exascale boundaries. However, the disparity in growth rates between storage capabilities and computing resources results in I/O bottlenecks, making it impractical to use conventional post-processing and visualization tools on such massive simulation ensembles. In situ visualization approaches alleviate I/O constraints by saving predetermined visualizations in image databases during the simulation. Nevertheless, because the raw output data are unavailable, in situ approaches limit the flexibility of post hoc exploration. Much research has been conducted to mitigate this limitation, but it falls short of simultaneously exploring and analyzing the parameter and ensemble spaces. In this paper, we propose an expert-in-the-loop visual exploration and analytics approach that leverages feature extraction, deep learning, and human expert–AI collaboration to explore and analyze image-based ensembles. Our approach uses local features and deep learning to learn the image features of ensemble members. The extracted features are then combined with the simulation input parameters and fed to the visualization pipeline for in-depth exploration and analysis through human expert + AI interaction techniques. We demonstrate the effectiveness of our approach on several scientific simulation ensembles.
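To make the feature-plus-parameter idea in this abstract concrete, the sketch below assumes grayscale in situ images and a small table of input parameters per ensemble member: it extracts local (HOG) descriptors from each image, standardizes and concatenates them with the parameters, and projects the joint representation to 2D for interactive exploration. The choice of HOG features, the PCA projection, and the relative weighting are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch (not the paper's pipeline): join image-derived features with
# simulation input parameters and embed them for visual exploration.
import numpy as np
from skimage.feature import hog
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def image_features(images):
    """Compute local (HOG) descriptors for each ensemble-member image."""
    return np.stack([hog(img, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
                     for img in images])

def joint_embedding(images, params, n_components=2, image_weight=1.0):
    """Standardize image features and input parameters, concatenate them,
    and project to a low-dimensional space suitable for a scatter-plot view."""
    feats = StandardScaler().fit_transform(image_features(images))
    pars = StandardScaler().fit_transform(params)
    joint = np.hstack([image_weight * feats, pars])
    return PCA(n_components=n_components).fit_transform(joint)

# Toy usage with random stand-in data (real input: in situ rendered images plus
# each member's simulation parameters).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = rng.random((20, 128, 128))   # 20 ensemble-member images
    params = rng.random((20, 3))          # 3 input parameters per member
    xy = joint_embedding(images, params)
    print(xy.shape)                        # (20, 2) points to plot and explore
```

A learned (deep) feature extractor or a nonlinear embedding could replace HOG and PCA without changing the overall flow, which is the part of the approach the abstract emphasizes: linking image content to input parameters for expert-guided exploration.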