Title: Data-driven seismic response prediction of structural components

Lateral stiffness of structural components, such as reinforced concrete (RC) columns, plays an important role in resisting lateral earthquake loads. The lateral stiffness relates the lateral force to the lateral deformation and therefore has a critical effect on the accuracy of lateral seismic response predictions. Classical methods for estimating the lateral stiffness (e.g. the fiber beam–column model) require calculations at the section, element, and structural levels, which is time-consuming. Moreover, shear deformation and the bond-slip effect may also need to be included to calculate the lateral stiffness more accurately, which further increases the modeling difficulty and the computational cost. To reduce the computational time and enhance the accuracy of the predictions, this article proposes a novel data-driven method to predict the lateral seismic response based on the estimated lateral stiffness. The proposed method integrates a machine learning (ML) approach with a hysteretic model: ML computes the parameters that govern the nonlinear properties of the lateral response of the target structural components directly from a training set of experimental data (i.e. a data-driven procedure), and the hysteretic model outputs the lateral stiffness from the computed parameters and is then used to perform the seismic analysis. We apply the proposed method to predict the lateral seismic response of various types of RC columns subjected to cyclic loading and ground motions. We present the detailed model formulation for this application, including the development of a modified hysteretic model, a hybrid optimization algorithm, and two data-driven seismic response solvers. The results predicted by the proposed method are compared with those obtained by classical methods, with the experimental data serving as the ground truth, showing that the proposed method significantly outperforms the classical methods in both generalized prediction capability and computational efficiency.
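The two-stage pipeline lends itself to a compact illustration. Below is a minimal sketch assuming scikit-learn for the ML stage and a simple bilinear backbone as a stand-in for the modified hysteretic model; the feature set, parameter names, and data are placeholders, not the paper's exact formulation.

```python
# Hedged sketch of the two-stage data-driven idea; all names and data are
# illustrative assumptions, not the paper's actual features or model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

# Stage 1: learn a map from column design variables (e.g., axial load ratio,
# reinforcement ratios, aspect ratio, concrete strength) to the parameters
# governing the nonlinear lateral response.
X_train = np.random.rand(200, 5)   # placeholder experimental features
y_train = np.random.rand(200, 3)   # placeholder targets: [k0, Fy, alpha] per test
param_model = MultiOutputRegressor(GradientBoostingRegressor())
param_model.fit(X_train, y_train)

def lateral_stiffness(u, k0, Fy, alpha):
    """Tangent lateral stiffness of a simple bilinear backbone:
    k0 below the yield displacement Fy/k0, alpha*k0 beyond it."""
    uy = Fy / k0
    return k0 if abs(u) <= uy else alpha * k0

# Stage 2: predict parameters for an unseen column, then query the
# hysteretic model for the stiffness used in the seismic analysis.
k0, Fy, alpha = param_model.predict(np.random.rand(1, 5))[0]
print(lateral_stiffness(0.01, k0, Fy, alpha))
```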

 
NSF-PAR ID: 10306313
Author(s) / Creator(s): ;
Publisher / Repository: SAGE Publications
Date Published:
Journal Name: Earthquake Spectra
Volume: 38
Issue: 2
ISSN: 8755-2930
Page Range / eLocation ID: p. 1382-1416
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
1.
Loss of operation or devastating damage to buildings and industrial structures, as well as to the equipment housed in them, has been observed due to earthquake-induced vibrations. A common source of operational downtime is the performance reduction of vital equipment that is sensitive to the total transmitted acceleration. A well-known method of protecting such equipment is seismic isolation of the equipment itself (or a group of equipment), rather than the entire structure, owing to the lower cost of implementation. The first objective of this dissertation is to assess a rolling isolation system (RIS) based on existing design guidelines for telecommunications equipment. A discrepancy is observed between the required response spectrum (RRS) and the single accelerogram recommended in the guideline. Several filters are developed to generate synthetic accelerograms that are compatible with the RRS. The generated accelerograms are used for a probabilistic assessment of a RIS that is acceptable per the guideline. This assessment reveals a large failure probability due to displacement demands in excess of the displacement capacity of the RIS.

When the displacement demands on an isolation system exceed its capacity, impacts result in spikes in the transmitted acceleration. The second objective of this dissertation is therefore to design impact prevention/mitigation mechanisms. A dual-mode system is proposed whose behavior changes when the displacement exceeds a predefined threshold. A new piecewise optimal control approach is developed and applied to find the best possible mechanism for the region beyond the threshold. Using the design curves obtained from the proposed optimal control procedure, a Kelvin-Voigt device is tuned for illustrative purposes.

On the other hand, the preference for protecting equipment decreases as the earthquake intensity increases. Under extreme seismic loading, response mitigation of the primary structure (i.e., life safety and collapse prevention) is of greater concern than protecting isolated equipment. The third objective of this dissertation is therefore to develop an innovative dual-mode system that behaves as an equipment isolator under low-to-moderate seismic loading and passively transitions to a vibration absorber for the primary structure under extreme seismic loading. To reduce the computational cost of simulating a large linear elastic structure with nonlinear attachments (i.e., equipment isolation with cubic hardening nonlinearity), a reduced-order modeling method is introduced that can capture the behavior of such nonlinear coupled systems. The method is applied to study the feasibility of the dual-mode vibration isolator/absorber. To this end, nonlinear transmissibility curves for the roof displacement and the isolated-mass total acceleration are developed from the steady-state responses of dual-mode systems using the harmonic balance method.

The final objective of this dissertation is to extend the reduced-order modeling method developed for linear elastic structures with nonlinear attachments to inelastic structures (without attachments). The new inelastic model condensation (IMC) method uses the modal properties of the full structural model (in the elastic range) to construct a linear reduced-order model in conjunction with a hysteresis model that captures the hysteretic inter-story restoring forces. The parameters of these hysteretic forces are easily tuned to fit the inelastic behavior of the condensed structure to that of the full model under a variety of simple loading scenarios. The fidelity of structural models condensed in this way is demonstrated via simulation for different ground motion intensities on three building structures of various heights. The simplicity, accuracy, and efficiency of this approach could significantly alleviate the computational burden of performance-based earthquake engineering.
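As one concrete illustration of the kind of hysteresis model referenced here, the following is a minimal sketch of a single-degree-of-freedom Bouc-Wen oscillator; the parameter values and excitation are illustrative assumptions, not the dissertation's IMC implementation.

```python
# Hedged sketch of a Bouc-Wen hysteretic oscillator of the kind commonly used
# to represent inter-story restoring forces; parameters are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

m, k, c = 1.0, 100.0, 0.5                    # mass, elastic stiffness, damping
alpha, beta, gamma, n = 0.1, 5.0, 5.0, 1.0   # Bouc-Wen shape parameters

def f_ext(t):
    return 2.0 * np.sin(2.0 * np.pi * t)     # stand-in for ground excitation

def rhs(t, y):
    u, v, z = y                               # displacement, velocity, hysteretic variable
    # standard Bouc-Wen evolution: dz = v * (A - (beta*sign(v*z) + gamma)*|z|^n), A = 1
    dz = v * (1.0 - (beta * np.sign(v * z) + gamma) * np.abs(z) ** n)
    f_restoring = alpha * k * u + (1.0 - alpha) * k * z
    return [v, (f_ext(t) - c * v - f_restoring) / m, dz]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, 0.0], max_step=1e-3)
```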
2.

Machine learning (ML) techniques have become increasingly important in seismology and earthquake science. Lab-based studies have used acoustic emission data to predict time-to-failure and stress state, and in a few cases the same approach has been applied to field data. However, the underlying physical mechanisms that allow lab earthquake prediction and seismic forecasting remain poorly resolved. Here, we address this knowledge gap by coupling active-source seismic data, which probe asperity-scale processes, with ML methods. We show that elastic waves passing through the lab fault zone contain information that can predict the full spectrum of labquakes, from slow slip instabilities to highly aperiodic events. The ML methods use systematic changes in P-wave amplitude and velocity to accurately predict the timing of and shear stress during labquakes. The ML predictions become more accurate closer to fault failure, demonstrating that the predictive power of the ultrasonic signals grows as the fault approaches failure. Our results demonstrate that the relationship between the ultrasonic parameters and the fault slip rate, and in turn the systematically evolving real area of contact and asperity stiffness, allows the gradient boosting algorithm to "learn" about the state of the fault and its proximity to failure. Broadly, our results demonstrate the utility of physics-informed ML in forecasting the imminence of fault slip at the laboratory scale, which may have important implications for earthquake mechanics in nature.
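A minimal sketch of this feature-to-target mapping is shown below, assuming scikit-learn's gradient boosting with time-ordered cross-validation; the P-wave features and stress labels are placeholders for the measured data.

```python
# Hedged sketch: regress shear stress on ultrasonic features with gradient
# boosting. Arrays stand in for measured data; nothing here is the paper's code.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

p_amplitude = np.random.rand(1000)   # placeholder windowed P-wave amplitudes
p_velocity = np.random.rand(1000)    # placeholder P-wave velocities
shear_stress = np.random.rand(1000)  # placeholder shear stress labels

X = np.column_stack([p_amplitude, p_velocity])
model = GradientBoostingRegressor(n_estimators=300, max_depth=3)

# time-ordered splits avoid leaking future labquake cycles into training
scores = cross_val_score(model, X, shear_stress, cv=TimeSeriesSplit(5), scoring="r2")
print(scores.mean())
```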

     
3.
Mechanics-based dynamic models are commonly used in the design and performance assessment of structural systems, and their accuracy can be improved by integrating models with measured data. This paper provides an overview of hierarchical Bayesian model updating, which has recently been developed for the probabilistic integration of models with measured data while accounting for different sources of uncertainty and modeling error. The proposed hierarchical Bayesian framework allows one to explicitly account for pertinent sources of variability, such as ambient temperature and/or excitation amplitude, as well as modeling errors, and therefore yields more realistic predictions. The paper reports observations from applications of the hierarchical approach to three full-scale civil structural systems: (1) a footbridge, (2) a 10-story reinforced concrete (RC) building, and (3) a damaged 2-story RC building. The first application highlights the capability of accounting for temperature effects within the hierarchical framework, while the second underlines the effect of including a bias term in the prediction error. The third application considers the effect of excitation amplitude on the structural response. The findings underline the importance and capabilities of the hierarchical Bayesian framework for structural identification. Discussions of its advantages and performance relative to classical deterministic and Bayesian model updating methods are provided.
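The two-level structure can be illustrated with a toy sampler. The sketch below assumes each dataset has its own stiffness-like parameter drawn from a shared population distribution and uses a plain random-walk Metropolis step; it is a schematic of the hierarchical idea, not the paper's implementation.

```python
# Hedged sketch of hierarchical Bayesian updating: per-dataset parameters
# theta_d ~ N(mu, sigma^2) are inferred jointly with the hyperparameters
# (mu, sigma) via a toy random-walk Metropolis sampler.
import numpy as np

rng = np.random.default_rng(0)
# toy "identified" quantities (e.g., natural frequencies) from three test campaigns
data = [rng.normal(loc=th, scale=0.1, size=20) for th in (0.9, 1.0, 1.1)]

def log_post(mu, log_sig, thetas):
    sig = np.exp(log_sig)
    lp = -0.5 * (mu - 1.0) ** 2                                     # weak prior on mu
    lp += np.sum(-0.5 * ((thetas - mu) / sig) ** 2 - np.log(sig))   # population level
    for th, y in zip(thetas, data):                                 # measurement level
        lp += np.sum(-0.5 * ((y - th) / 0.1) ** 2)
    return lp

state = np.array([1.0, np.log(0.1), 0.9, 1.0, 1.1])  # [mu, log sigma, theta_1..3]
for _ in range(5000):
    prop = state + 0.02 * rng.standard_normal(state.size)
    if np.log(rng.random()) < log_post(prop[0], prop[1], prop[2:]) - \
                              log_post(state[0], state[1], state[2:]):
        state = prop
print(state)
```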
4. INTRODUCTION
Solving quantum many-body problems, such as finding ground states of quantum systems, has far-reaching consequences for physics, materials science, and chemistry. Classical computers have facilitated many profound advances in science and technology, but they often struggle to solve such problems. Scalable, fault-tolerant quantum computers will be able to solve a broad array of quantum problems but are unlikely to be available for years to come. Meanwhile, how can we best exploit our powerful classical computers to advance our understanding of complex quantum systems? Recently, classical machine learning (ML) techniques have been adapted to investigate problems in quantum many-body physics. So far, these approaches are mostly heuristic, reflecting the general paucity of rigorous theory in ML. Although they have been shown to be effective in some intermediate-size experiments, these methods are generally not backed by convincing theoretical arguments to ensure good performance.

RATIONALE
A central question is whether classical ML algorithms can provably outperform non-ML algorithms in challenging quantum many-body problems. We provide a concrete answer by devising and analyzing classical ML algorithms for predicting the properties of ground states of quantum systems. We prove that these ML algorithms can efficiently and accurately predict ground-state properties of gapped local Hamiltonians, after learning from data obtained by measuring other ground states in the same quantum phase of matter. Furthermore, under a widely accepted complexity-theoretic conjecture, we prove that no efficient classical algorithm that does not learn from data can achieve the same prediction guarantee. By generalizing from experimental data, ML algorithms can solve quantum many-body problems that could not be solved efficiently without access to experimental data.

RESULTS
We consider a family of gapped local quantum Hamiltonians, where the Hamiltonian H(x) depends smoothly on m parameters (denoted by x). The ML algorithm learns from a set of training data consisting of sampled values of x, each accompanied by a classical representation of the ground state of H(x). These training data could be obtained from either classical simulations or quantum experiments. During the prediction phase, the ML algorithm predicts a classical representation of ground states for Hamiltonians different from those in the training data; ground-state properties can then be estimated using the predicted classical representation. Specifically, our classical ML algorithm predicts expectation values of products of local observables in the ground state, with a small error when averaged over the value of x. The run time of the algorithm and the amount of training data required both scale polynomially in m and linearly in the size of the quantum system. Our proof of this result builds on recent developments in quantum information theory, computational learning theory, and condensed matter theory. Furthermore, under the widely accepted conjecture that nondeterministic polynomial-time (NP)-complete problems cannot be solved in randomized polynomial time, we prove that no polynomial-time classical algorithm that does not learn from data can match the prediction performance achieved by the ML algorithm. In a related contribution using similar proof techniques, we show that classical ML algorithms can efficiently learn how to classify quantum phases of matter. In this scenario, the training data consist of classical representations of quantum states, where each state carries a label indicating whether it belongs to phase A or phase B. The ML algorithm then predicts the phase label for quantum states that were not encountered during training. The classical ML algorithm not only classifies phases accurately but also constructs an explicit classifying function. Numerical experiments verify that our proposed ML algorithms work well in a variety of scenarios, including Rydberg atom systems, two-dimensional random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.

CONCLUSION
We have rigorously established that classical ML algorithms, informed by data collected in physical experiments, can effectively address some quantum many-body problems. These rigorous results boost our hopes that classical ML trained on experimental data can solve practical problems in chemistry and materials science that would be too hard to solve using classical processing alone. Our arguments build on the concept of a succinct classical representation of quantum states derived from randomized Pauli measurements. Although some quantum devices lack the local control needed to perform such measurements, we expect that other classical representations could be exploited by classical ML with similarly powerful results. How can we make use of accessible measurement data to predict properties reliably? Answering such questions will expand the reach of near-term quantum platforms.

Figure: Classical algorithms for quantum many-body problems. Classical ML algorithms learn from training data, obtained from either classical simulations or quantum experiments. Then, the ML algorithm produces a classical representation for the ground state of a physical system that was not encountered during training. Classical algorithms that do not learn from data may require substantially longer computation time to achieve the same task.
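As a toy illustration of the learning setup (not the paper's provably efficient algorithm), the sketch below regresses a ground-state expectation value against the Hamiltonian parameters x using kernel ridge regression; the data-generating function is a stand-in for measured ground-state data.

```python
# Toy illustration: predict ground-state expectation values <O> for unseen
# Hamiltonian parameters x after training on sampled (x, <O>) pairs.
# The target function is a placeholder, not real measurement data.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

m = 4                                     # number of Hamiltonian parameters
X_train = np.random.rand(100, m)          # sampled parameter vectors x
y_train = np.cos(X_train.sum(axis=1))     # stand-in for measured <O> in the ground state

model = KernelRidge(kernel="rbf", alpha=1e-3)
model.fit(X_train, y_train)

X_new = np.random.rand(10, m)             # Hamiltonians not seen during training
print(model.predict(X_new))               # predicted ground-state expectation values
```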
5. Accurate characterization of the mechanical properties of the human brain at both microscopic and macroscopic length scales is a critical requirement for modeling traumatic brain injury and brain folding. To date, most experimental studies that employ classical tension/compression/shear tests report the mechanical properties of the brain averaged over both the gray and white matter within the macroscopic regions of interest. As a result, there is a missing correlation between the independent mechanical properties of the microscopic constituent elements and the composite bulk macroscopic mechanical properties of the tissue. This microstructural computational study aims to inversely predict the hyperelastic mechanical properties of the axonal fibers and their surrounding extracellular matrix (ECM) from the bulk tissue's mechanical properties. We develop a representative volume element (RVE) model of the bulk tissue, consisting of axonal fibers and ECM, using the embedded element technique. A multiobjective optimization technique is implemented to calibrate the model and establish the independent mechanical properties of the axonal fibers and ECM based on seven previously reported experimental mechanical tests for bulk white matter tissue from the corpus callosum. The results show that the discrepancy between the values reported in the literature for the elastic behavior of white matter stems from the anisotropy of the tissue at the microscale. The shear modulus of the axonal fibers is seven times larger than that of the ECM, and the axonal fibers also show greater nonlinearity, contrary to the common assumption that both components exhibit identical nonlinear characteristics.

Statement of significance: The reported mechanical properties of the white matter microstructure used in traumatic brain injury and brain mechanics studies vary widely, in some cases by up to two orders of magnitude. Currently, the material parameters of the white matter microstructure are identified from a single loading mode, or at most two modes, of the bulk tissue. The resulting material models only define the response of the bulk, homogenized white matter at the macroscopic scale and cannot explicitly capture the connection between the material properties of the microstructure and those of the bulk structure. To fill this knowledge gap, our study characterizes the hyperelastic material properties of the axonal fibers and ECM using microscale computational modeling and multiobjective optimization. The hyperelastic material properties presented here are more accurate than those previously proposed because they have been optimized using six or seven loading modes of the bulk tissue, whereas previous characterizations were limited to only two of the seven possible loading modes. As such, the predicted values can be used with high accuracy in various computational modeling studies. The systematic characterization of the material properties of human brain tissue at both macro- and microscales will lead to more accurate computational predictions, enabling a better understanding of injury criteria, improved development of smart protection systems, and more accurate prediction of brain development and disease progression.
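The calibration idea can be sketched as a scalarized stand-in for the multiobjective optimization: fit independent shear moduli for the fibers and ECM so that a simplified composite response matches bulk loading data. The volume fraction, material model, and data below are illustrative assumptions, not the study's RVE with embedded elements.

```python
# Hedged sketch: calibrate neo-Hookean shear moduli for axonal fibers and ECM
# against placeholder bulk stress-stretch data via least squares. The
# micromechanics is deliberately simplified relative to the study's RVE model.
import numpy as np
from scipy.optimize import least_squares

phi = 0.5                                  # assumed fiber volume fraction
stretches = np.linspace(1.0, 1.1, 7)       # sample points along a loading curve
bulk_stress = 1.5e3 * (stretches - 1.0)    # placeholder experimental data [Pa]

def composite_stress(mu_fiber, mu_ecm, lam):
    # Voigt-style mixture of two incompressible neo-Hookean responses
    # in uniaxial tension along the fiber direction
    s = lambda mu: mu * (lam ** 2 - 1.0 / lam)
    return phi * s(mu_fiber) + (1.0 - phi) * s(mu_ecm)

def residuals(p):
    mu_fiber, mu_ecm = p
    return composite_stress(mu_fiber, mu_ecm, stretches) - bulk_stress

fit = least_squares(residuals, x0=[1e3, 1e2], bounds=([0, 0], [np.inf, np.inf]))
print(fit.x)                               # calibrated [mu_fiber, mu_ecm]
```

In the actual study, multiple loading modes constrain the two moduli independently; with a single mode, as in this toy, only their mixture is identifiable, which is precisely why the multiobjective formulation over six or seven tests matters.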