

This content will become publicly available on June 28, 2024

Title: Optimal Sampling Methodologies for High-rate Structural Twinning
In high-rate structural health monitoring, it is crucial to quickly and accurately assess the current state of a component under dynamic loads. State information is needed to make informed decisions about timely interventions that prevent damage and extend the structure’s life. In previous studies, a dynamic reproduction of projectiles in ballistic environments (DROPBEAR) testbed was used to evaluate the accuracy of state estimation techniques through dynamic analysis. This paper extends that research by incorporating the local eigenvalue modification procedure (LEMP) and data fusion techniques to create a more robust state estimate using optimal sampling methodologies. The state is estimated by taking a measured frequency response of the structure, proposing frequency response profiles, and accepting the most similar profile as the new mean of the position estimate distribution. LEMP allows a faster approximation of each proposed model with linear time complexity, making it suitable for 2D or sequential damage cases. The current study focuses on two refinements of the sampling methodology: distilling the selection of candidate test models from the position distribution, and applying a Kalman filter after the distribution update to find the mean. Both refinements improved the position estimate and the structural state accuracy, as shown by the time response assurance criterion and the signal-to-noise ratio, with improvements of up to 17%. These two metrics demonstrate the benefits of incorporating data fusion techniques into the high-rate state identification process.
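As a rough illustration of the metrics and the filtering step named above, the sketch below computes a time response assurance criterion (TRAC) between two signals and applies a scalar Kalman-style update to a position estimate. The 1-D filter form, the test signals, and the variances are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def trac(x, y):
    """Time response assurance criterion: 1.0 means identical shape."""
    return float(np.dot(x, y) ** 2 / (np.dot(x, x) * np.dot(y, y)))

def kalman_update(mean, var, z, meas_var):
    """Scalar Kalman update of a position estimate (hypothetical 1-D
    form; the paper's filter may differ)."""
    k = var / (var + meas_var)          # Kalman gain
    return mean + k * (z - mean), (1.0 - k) * var

t = np.linspace(0.0, 0.1, 500)
measured = np.sin(2 * np.pi * 80 * t)            # measured response
proposed = np.sin(2 * np.pi * 80 * t + 0.05)     # near-matching profile
print(round(trac(measured, measured), 3))        # identical signals -> 1.0
m, p = kalman_update(0.50, 0.01, 0.48, 0.02)     # prior vs. measurement
```

The update pulls the mean toward the measurement in proportion to the relative variances, which is the role the Kalman filter plays after the distribution update described in the abstract.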
Award ID(s):
2237696
NSF-PAR ID:
10489197
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
Journal Name:
2023 26th International Conference on Information Fusion (FUSION)
Page Range / eLocation ID:
1 to 8
Format(s):
Medium: X
Location:
Charleston, SC, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Lung and heart sound classification is challenging due to the complex nature of audio data and its dynamic properties in the time and frequency domains. It is also very difficult to detect lung or heart conditions from small, unbalanced, or noisy datasets. Furthermore, data quality is a considerable obstacle to improving the performance of deep learning. In this paper, we propose a novel feature-based fusion network called FDC-FS for classifying heart and lung sounds. The FDC-FS framework aims to effectively transfer learning from three different deep neural network models built from audio datasets. The innovation of the proposed transfer learning relies on the transformation from audio data to image vectors and from three specific models to one fused model that is more suitable for deep learning. We used two publicly available datasets for this study, i.e., lung sound data from the ICBHI 2017 challenge and heart challenge data. We applied data augmentation techniques, such as noise distortion, pitch shift, and time stretching, to deal with some data issues in these datasets. Importantly, we extracted three unique features from the audio samples, i.e., Spectrogram, MFCC, and Chromagram. Finally, we built a fusion of three optimal convolutional neural network models by feeding in the image feature vectors transformed from the audio features. We confirmed the superiority of the proposed fusion model compared to state-of-the-art works. The highest accuracy we achieved with FDC-FS is 99.1% with Spectrogram-based lung sound classification, and 97% for Spectrogram- and Chromagram-based heart sound classification.
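A minimal numpy sketch of two of the steps described above: framing an audio signal into a magnitude spectrogram, and applying noise-distortion augmentation at a chosen SNR. The frame sizes, window, and test tone are illustrative stand-ins, not the paper's settings (which use librosa-style Spectrogram/MFCC/Chromagram features):

```python
import numpy as np

def spectrogram(x, n_fft=256, hop=128):
    """Magnitude spectrogram via a framed, Hann-windowed FFT
    (a minimal stand-in for a full feature extractor)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T  # (freq, time)

def add_noise(x, snr_db=20.0, rng=None):
    """Noise-distortion augmentation at a target SNR in dB."""
    rng = rng or np.random.default_rng(0)
    p_sig = np.mean(x ** 2)
    p_noise = p_sig / (10.0 ** (snr_db / 10.0))
    return x + rng.normal(0.0, np.sqrt(p_noise), size=x.shape)

sr = 8000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t)        # 1 s test tone, not real lung audio
spec = spectrogram(add_noise(sig))
print(spec.shape)                        # (129, 61)
```

The resulting 2-D array is the kind of image-like feature vector the abstract describes feeding into the fused convolutional models.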
  2. In this work, we present a novel machine-learning approach to real-time tracking of full-chip heatmaps for commercial off-the-shelf microprocessors. The proposed post-silicon approach, named RealMaps, uses only the existing embedded temperature sensors and workload-independent utilization information, which are available in real time. Moreover, RealMaps does not require any knowledge of the proprietary design details or manufacturing process-specific information of the chip. Consequently, the methods presented in this work can be implemented by either the original chip manufacturer or a third party, and are aimed at supplementing, rather than substituting, the temperature data sensed from the existing embedded sensors. The new approach starts with offline acquisition of accurate spatial and temporal heatmaps using an infrared thermal imaging setup while nominal working conditions are maintained on the chip. To build the dynamic thermal model, a temporal-aware long short-term memory (LSTM) neural network is trained with system-level features such as chip frequency, instruction counts, and other high-level performance metrics as inputs. Instead of a pixel-wise heatmap estimation, we perform a 2D spatial discrete cosine transformation (DCT) on the heatmaps so that they can be expressed with just a few dominant DCT coefficients. This allows the model to estimate just the dominant spatial features of the 2D heatmaps, rather than the entire heatmap images, making it significantly more efficient. Experimental results from two commercial chips show that RealMaps can estimate the full-chip heatmaps with 0.9 °C and 1.2 °C root-mean-square error, respectively, and takes only 0.4 ms per inference, which is well suited to real-time use. Compared to the state-of-the-art pre-silicon approach, RealMaps shows similar accuracy, but with much lower computational cost.
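The DCT-compression idea is straightforward to sketch: transform the heatmap, zero all but a small block of low-order coefficients, and invert. The sketch below uses scipy's `dctn`/`idctn` on a synthetic smooth heatmap; the 64x64 grid and 8x8 coefficient block are illustrative choices, not RealMaps' actual settings:

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_heatmap(heat, keep=8):
    """Keep only the low-order keep x keep block of 2-D DCT
    coefficients, mirroring the dominant-coefficient idea
    (parameters are illustrative)."""
    c = dctn(heat, norm="ortho")
    c[keep:, :] = 0.0
    c[:, keep:] = 0.0
    return idctn(c, norm="ortho")

# Smooth synthetic "heatmap": one hot spot on a 64x64 die, in deg C
y, x = np.mgrid[0:64, 0:64]
heat = 40.0 + 20.0 * np.exp(-((x - 20) ** 2 + (y - 44) ** 2) / 300.0)

recon = compress_heatmap(heat, keep=8)
rmse = np.sqrt(np.mean((recon - heat) ** 2))
```

Because real heatmaps are spatially smooth, a few dominant coefficients reconstruct them almost exactly, which is why the model can regress 64 coefficients instead of 4096 pixels.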
  3.
    Abstract

    Many structures are subjected to varying forces, moving boundaries, and other dynamic conditions. Whether part of a vehicle, building, or active energy mitigation device, data on such changes can represent useful knowledge, but also presents challenges in its collection and analysis. In systems where changes occur rapidly, assessment of the system’s state within a useful time span is required to enable an appropriate response before the system’s state changes further. Rapid state estimation is especially important but poses unique difficulties.

    In determining the state of a structural system subjected to high-rate dynamic changes, measuring the frequency response is one method that can be used to draw inferences, provided the system is adequately understood and defined. The work presented here is the result of an investigation into methods to determine the frequency response, and thus state, of a structure subjected to high-rate boundary changes in real time.

    In order to facilitate development, the Air Force Research Laboratory created the DROPBEAR, a testbed with an oscillating beam subjected to a continuously variable boundary condition. One end of the beam is held by a stationary fixed support, while a pinned support is able to move along the beam’s length. The free end of the beam structure is instrumented with acceleration, velocity, and position sensors measuring the beam’s vertical axis. Direct position measurement of the pin location is also taken to provide a reference for comparison with numerical models.

    This work presents a numerical investigation into methods for extracting the frequency response of a structure in real time. An FFT-based method with a rolling window is used to track the frequency of a data set generated to represent the range of the DROPBEAR, and is run with multiple window lengths. The frequency precision and latency of the FFT method are analyzed in each configuration. A specialized frequency extraction technique, Delayed Comparison Error Minimization, is implemented with parameters optimized for the frequency range of interest. The performance metrics of latency and precision are analyzed and compared to the baseline rolling FFT method results, and applicability is discussed.
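The window-length trade-off is easy to see in a sketch: an FFT over the newest n samples resolves frequency only to fs/n, so a shorter window responds faster but less precisely. A minimal numpy illustration (the sampling rate, test tone, and window lengths are illustrative, not DROPBEAR parameters):

```python
import numpy as np

def rolling_fft_freq(x, fs, n_win):
    """Peak-bin frequency from an FFT over the newest n_win samples.
    Bin spacing fs/n_win sets the precision/latency trade-off."""
    win = x[-n_win:] * np.hanning(n_win)
    mag = np.abs(np.fft.rfft(win))
    k = np.argmax(mag[1:]) + 1          # skip the DC bin
    return k, fs / n_win                # peak bin, bin spacing in Hz

fs = 10_000
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 180.0 * t)       # stand-in for a beam response

for n_win in (512, 2048):               # short (fast) vs long (precise)
    k, df = rolling_fft_freq(x, fs, n_win)
    print(f"n={n_win}: f ~ {k * df:.1f} Hz (+/- {df / 2:.1f} Hz)")
```

With n = 512 the estimate is available after 51.2 ms of data but is only resolved to ~19.5 Hz bins; with n = 2048 the bins shrink to ~4.9 Hz at four times the latency, which is the trade-off the study quantifies.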

     
  4. The National Space Weather Action Plan, NERC Reliability Standards TPL-007-1 and TPL-007-2, associated FERC Orders 779 and 830 and subsequent actions, the emerging standard TPL-007-3, and Executive Order 13744 have prepared the regulatory framework and roadmap for assessing and mitigating the impact of space weather on critical infrastructure. These actions have resulted in an emerging set of benchmarks against which the statistical probability of damage to critical components, such as power transmission system high-voltage transformers, can be assessed; for the first time, the impacts on the intensity of geomagnetically induced currents (GICs) due to the spatial variability of the geomagnetic field and of the Earth’s electrical conductivity structure can be examined systematically. While at present not a strict requirement of the existing reliability standards, there is growing evidence that the strongly three-dimensional nature of the electrical conductivity structure of the North American crust and mantle (hereafter ‘ground conductivity’) has a first-order impact on GIC intensity, with considerable local and regional variability. The strongly location-dependent ground electric field intensification and attenuation due to 3-D ground conductivity variations has an equivalent impact on the assessment of risk to critical infrastructure from HEMP (E3 phase) sources of geomagnetic disturbances (GMDs) as it does for natural GMDs. From 2006-2018, Oregon State University (OSU), under NSF EarthScope Program support, installed and acquired ground electric and magnetic field time series (magnetotelluric, or MT) data on a grid of station locations spaced ~70 km apart, at 1161 long-period MT stations covering nearly ⅔ of CONUS. The US Geological Survey completed 47 additional MT stations using functionally identical instrumentation, and the two data sets were merged and made available in the public domain.
OSU and its project collaborators have also collected hundreds of wider-bandwidth, more densely spaced MT station data sets under other project support, and these have been or will be released for public access in the near future. NSF funding was not available to collect EarthScope MT data in the remaining 1/3 of CONUS in the southern tier of states, in a band from central California in the west to Alabama in the east, extending along the Gulf Coast and Deep South. OSU, with NASA support just received, plans to complete MT station installation in the remainder of California this year, and with additional support, both anticipated and proposed, we hope to complete the MT array in the remainder of CONUS. For the first time, this will provide national-scale 3-D electrical conductivity/MT impedance data throughout the US portion of the contiguous North American power grid. Complementary planning and proposal efforts are underway in Canada, including collaborations between OSU, Athabasca University, and other Canadian academic and industry groups. In the present work, we apply algorithms we have developed to make use of real-time streams of US Geological Survey, Natural Resources Canada (and other) magnetic observatory data, and the EarthScope and other MT data sets, to provide quasi-real-time predictions of the geomagnetically induced voltages at high-voltage transmission system transformers/power buses. This goes beyond the statistical benchmarking process currently encapsulated in NERC reliability standards. We seek initially to provide real-time information to power utility control room operators, in the form of a heat map showing which assets are likely experiencing stress due to induced currents.
These assessments will be ground-truthed against transmission system sensor data (PMUs, GIC monitors, voltage waveforms and harmonics where available), and by applying machine learning methods we hope to extend this approach to transmission systems that have sparse or non-existent GIC monitoring sensor infrastructure. Ultimately by incorporating predictive models of the geomagnetic field using satellite data as inputs rather than real-time ground magnetic field measurements, a near-term probabilistic assessment of risk to transformers may be possible, ideally providing at least a 15-minute forecast to utility operators. There has been a concerted effort by NOAA to develop a real-time geomagnetically induced ground electric field data product that makes use of our EarthScope MT data, which includes the strong impacts on GICs due to 3-D ground conductivity structure. Both OSU and the USGS have developed methods to determine the GIC-related voltages at substations by integrating the ground electric fields along power transmission line paths. Under National Science Foundation support, the present team of investigators is taking the next step, of applying the GIC-related voltages as inputs to quasi-real time power flow models of the power transmission grid in order to obtain realistic and verifiable predictions of the intensity of induced GICs, the reactive power loss due to GICs, and of GIC effects on the current and voltage waveforms, such as the harmonic distortion. As we work toward integration of predicted induced substation voltages with power flow models, we’ve modified the RTS-GMLC (Reliability Test System Grid Modernization Lab Consortium) test case (https://github.com/GridMod/RTS-GMLC) by moving the geographic location of the case to central Oregon. 
With the assistance of LANL we have the complete AC and DC network of the RTS-GMLC case, and we are working to integrate the complete case information into Julia (using the PowerModels and PowerModelsGMD packages of LANL), or into PowerWorld. Along a parallel track, we have performed GIC voltage calculations using our geophysical algorithm for a realistic GMD event (Halloween event) for the test case, resulting in GIC transmission line voltages that can be added into our power system model. We’ll discuss our progress in integrating the geophysical estimates of transformer voltages and our DC model using LANL's Julia and PowerModelsGMD package, for power flow simulations on the test case, and to determine the GIC flows and possible impacts on the power waveforms in the system elements. 
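The substation-voltage step described above, integrating the ground electric field along a transmission-line path, can be sketched for the simplest case. The piecewise-linear path, the uniform field, and all numbers below are hypothetical, not project data:

```python
import numpy as np

def gic_line_voltage(path_km, e_field_v_per_km):
    """Integrate a horizontal ground electric field along a
    transmission-line path (piecewise-linear segments; the field is
    taken as constant here). A toy version of the line-integral
    calculation described in the text."""
    segs = np.diff(path_km, axis=0)          # segment vectors, km
    return float(np.sum(segs @ e_field_v_per_km))

# Hypothetical 3-segment line and a uniform 1.5 V/km northward field
path = np.array([[0.0, 0.0], [50.0, 10.0], [120.0, 10.0], [120.0, 80.0]])
e = np.array([0.0, 1.5])                      # (east, north) in V/km
v = gic_line_voltage(path, e)
print(v)    # 120.0 -> 1.5 V/km times the 80 km net northward displacement
```

For a uniform field only the endpoints matter; the real calculation uses spatially varying fields derived from the MT impedances, which is exactly why the 3-D ground conductivity structure has a first-order effect on the result.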
  5. SUMMARY

    We present results on radiated seismic energy during simulations of dynamic ruptures in a continuum damage-breakage rheological model incorporating the evolution of damage within the seismic source region. The simulations vary in their initial damage zone width and rate of damage diffusion, with parameter values constrained by observational data. The radiated energy recorded at various positions around the source is used to calculate seismic potency and moment. We also calculate the normalized radiated energy from the source in a way that allows comparison between results of different simulations and highlights aspects related to the dilatational motion during rupture. The results show that at high frequencies, beyond the dominant frequency of the source ($f > 3f_d$), the damage process produces an additional burst of energy, mainly in the P waves. This excess of high-frequency energy is observed by comparing the radiated energy to a standard Brune model with a decay slope of n = 2. While the S waves show good agreement with the n = 2 slope, the P waves have a milder slope of n = 1.75 or less, depending on the damage evolution at the source. In the damage-breakage rheology used here, the rate of damage diffusivity governs the damage evolution perpendicular to the rupture direction and dynamic changes of the damage zone width. For increasing values of damage diffusivity, dilatational energy becomes more prominent during rupture, producing a high-frequency dilatational signature within the radiation pattern. The high-frequency radiation pattern of the P waves includes two main lobes perpendicular to the rupture direction, reflecting high-rate local tensile cracking during the overall shear rupture process. Analysing the possible existence and properties of such a high-frequency radiation pattern in observed P waves could provide important information on earthquake source processes.
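The spectral comparison above can be sketched directly: a generalized Brune-type source spectrum, Omega(f) = Omega_0 / (1 + (f/f_c)^n), falls off as f^(-n) above the corner frequency, so a P-wave slope of n = 1.75 sits above the n = 2 reference at high frequency. A hedged numpy illustration (the corner frequency and frequency band are arbitrary choices, not the paper's simulation values):

```python
import numpy as np

def brune_spectrum(f, fc, n, omega0=1.0):
    """Generalized Brune-type source spectrum with
    high-frequency fall-off exponent n."""
    return omega0 / (1.0 + (f / fc) ** n)

fc = 1.0                                  # corner frequency, Hz (arbitrary)
f = np.logspace(0.7, 2.0, 200)            # band well above the corner
s_wave = brune_spectrum(f, fc, n=2.0)     # standard n = 2 decay
p_wave = brune_spectrum(f, fc, n=1.75)    # milder damage-related decay

# Relative high-frequency excess of the milder slope, in dB
excess_db = 20.0 * np.log10(p_wave / s_wave)
```

The excess grows with frequency, which is the signature of the additional high-frequency P-wave energy the simulations attribute to the damage process.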

     