

Search for: All records

Award ID contains: 2131111

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Abstract In high seismic risk regions, it is important for city managers and decision makers to create programs that mitigate risk for buildings. For large cities and regions, a mitigation program relies on accurate information about the building stock, that is, a database of all buildings in the area and the potential structural defects that make them vulnerable to strong ground shaking. Structural defects and vulnerabilities can manifest in a building's appearance. One such example is the soft-story building, whose vertical irregularity is often observable from the facade. This structural type can suffer severe damage or even collapse during moderate or severe earthquakes, so it is critical to screen large building stocks to find and retrofit such buildings. However, screening for soft-story structures by conventional methods is time-consuming. To tackle this issue, our previous study used full-image classification to screen them out from street view images. However, full-image classification has difficulty locating buildings within an image, which leads to unreliable predictions. In this paper, we develop an automated pipeline that segments street view images to identify soft-story buildings. Because annotated data for this purpose is scarce, we compiled a dataset of street view images and present a strategy for annotating these images in a semi-automatic way. The annotated dataset is then used to train an instance segmentation model that can detect soft-story buildings in unseen images.
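The final screening step described above can be sketched as a simple filter over per-instance predictions. This is a minimal illustration, not the paper's pipeline: the class names, score field, and confidence threshold are invented placeholders for whatever a trained instance segmentation model would emit.

```python
# Minimal sketch of screening instance-segmentation output for soft-story
# candidates. Labels ("soft_story", "other_building") and the 0.5
# threshold are illustrative assumptions, not the study's actual schema.

def screen_soft_story(detections, threshold=0.5):
    """Return detections flagged as soft-story buildings.

    detections: list of dicts like
        {"label": str, "score": float, "box": (x1, y1, x2, y2)}
    """
    return [
        d for d in detections
        if d["label"] == "soft_story" and d["score"] >= threshold
    ]

# Hypothetical predictions for one street view image.
preds = [
    {"label": "soft_story", "score": 0.91, "box": (10, 20, 200, 300)},
    {"label": "other_building", "score": 0.88, "box": (210, 15, 400, 310)},
    {"label": "soft_story", "score": 0.32, "box": (405, 30, 500, 280)},
]
flagged = screen_soft_story(preds)
print(len(flagged))  # 1: only the high-confidence soft-story detection
```

Low-confidence detections are dropped so that only buildings the model is reasonably sure about are forwarded for retrofit screening.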
  2. Abstract Nonlinear response history analysis (NLRHA) is generally considered a reliable and robust method for assessing the seismic performance of buildings under strong ground motions. While NLRHA is fairly straightforward for evaluating individual structures with a select set of ground motions at a specific building site, it becomes less practical for running large numbers of analyses to evaluate either (1) multiple models of alternative design realizations with a site-specific set of ground motions, or (2) individual archetype building models at multiple sites with multiple sets of ground motions. In this regard, surrogate models offer an alternative to running repeated NLRHAs for variable design realizations or ground motions. In this paper, a recently developed surrogate modeling technique called probabilistic learning on manifolds (PLoM) is presented for estimating structural seismic response. Essentially, the PLoM method provides an efficient stochastic model for developing mappings between random variables, which can then be used to efficiently estimate the structural responses of systems with variations in design/modeling parameters or ground motion characteristics. The PLoM algorithm is introduced and then used in two case studies of 12-story buildings to estimate probability distributions of structural responses. The first example focuses on the mapping between variable design parameters of a multi-degree-of-freedom analysis model and its peak story drift and acceleration responses. The second example applies the PLoM technique to estimate structural responses for variations in site-specific ground motion characteristics. In both examples, training data sets are generated on orthogonal input parameter grids, and test data sets are developed for input parameters with prescribed statistical distributions. Validation studies examine the accuracy and efficiency of the PLoM models. Overall, both examples show good agreement between the PLoM estimates and the verification data sets. Moreover, in contrast to other common surrogate modeling techniques, the PLoM model preserves the correlation structure between peak responses. Parametric studies are conducted to understand the influence of different PLoM tuning parameters on prediction accuracy.
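The generic surrogate workflow the abstract describes (train on an orthogonal input grid, then query at inputs drawn from a prescribed distribution) can be sketched as follows. A plain quadratic least-squares fit stands in for PLoM, which builds a far richer stochastic model on a diffusion manifold; the response function, noise level, and parameter ranges here are all invented for illustration.

```python
# Generic surrogate workflow: fit on a training grid, query at sampled
# inputs. A quadratic fit is a stand-in for PLoM; the 0.02/x "drift"
# response and its noise are invented, not from the paper's case studies.
import numpy as np

rng = np.random.default_rng(0)

# Training grid: one design parameter (e.g., normalized story stiffness).
x_train = np.linspace(0.5, 1.5, 11)
# Hypothetical peak story drift "observations" from NLRHA runs.
y_train = 0.02 / x_train + rng.normal(0.0, 1e-4, x_train.size)

# Fit a quadratic surrogate by least squares.
coeffs = np.polyfit(x_train, y_train, deg=2)

# Query the surrogate for inputs with a prescribed distribution,
# yielding an estimated response distribution without new NLRHA runs.
x_test = rng.uniform(0.6, 1.4, 1000)
drift_samples = np.polyval(coeffs, x_test)

print(drift_samples.mean())
```

The payoff is the same as with PLoM: once trained, thousands of response samples cost fractions of a second instead of thousands of nonlinear analyses.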
  3. Stochastic emulation techniques represent a specialized branch of surrogate modeling appropriate for applications in which the relationship between input and output is stochastic in nature. Their objective is to address stochastic uncertainty sources by directly predicting the output distribution for a given input. An example of such an application, and the focus of this contribution, is the estimation of the structural response (engineering demand parameter) distribution in seismic risk assessment. In this case, the stochastic uncertainty originates from the aleatoric variability in the seismic hazard description. Note that this is a different uncertainty source than the potential parametric uncertainty associated with structural characteristics or with explanatory variables for the seismic hazard (for example, intensity measures), which are treated as the parametric input in the surrogate modeling context. The key challenge in stochastic emulation is addressing heteroscedasticity in the output variability. Relevant approaches to date have focused on scalar outputs. In contrast, this paper focuses on the multi-output stochastic emulation problem and presents a methodology for predicting the output correlation matrix while fully addressing heteroscedastic characteristics. This is achieved by introducing a Gaussian Process (GP) regression model to approximate the components of the correlation matrix and coupling this approximation with a correction step that guarantees positive definiteness of the resulting predictions. For obtaining the observation data that inform the GP calibration, different approaches are examined, which may or may not rely on replicated samples of the response output. Such samples require that, for a portion of the training points, simulations be repeated for the same inputs and different descriptions of the stochastic uncertainty. This information can be readily used to obtain observations of the response statistics (correlation or covariance in this instance) to inform the GP development. An alternative approach is to use as observations noisy covariance samples based on the sample deviations from a primitive mean approximation. These different observation variants lead to different GP variants that are compared within a comprehensive case study. A computational framework for integrating the correlation matrix approximation within the stochastic emulation of the marginal distribution of each output component is also discussed, providing an approximation of the joint response distribution.
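The correction step mentioned above is needed because entry-by-entry GP predictions of a correlation matrix need not form a valid (positive semidefinite) matrix. A common fix, sketched below under the assumption that this is the kind of correction intended (the paper's exact procedure may differ), is to clip negative eigenvalues and rescale back to unit diagonal; the example matrix is invented.

```python
# Eigenvalue-clipping repair of an indefinite "predicted" correlation
# matrix. This is the standard approach; the paper's specific correction
# step may differ. The input matrix values are illustrative only.
import numpy as np

def nearest_correlation(c, eps=1e-8):
    """Map a symmetric matrix to a valid correlation matrix:
    positive semidefinite with ones on the diagonal."""
    c = 0.5 * (c + c.T)                 # enforce symmetry
    w, v = np.linalg.eigh(c)
    w = np.clip(w, eps, None)           # clip negative eigenvalues
    c_psd = v @ np.diag(w) @ v.T
    d = np.sqrt(np.diag(c_psd))
    return c_psd / np.outer(d, d)       # rescale to unit diagonal

# Entrywise GP predictions can be mutually inconsistent, e.g.:
pred = np.array([[ 1.00, 0.95, -0.90],
                 [ 0.95, 1.00,  0.80],
                 [-0.90, 0.80,  1.00]])
fixed = nearest_correlation(pred)
print(np.linalg.eigvalsh(fixed).min() > 0)  # True: now positive definite
```

The congruence transform used for the diagonal rescaling preserves positive definiteness, so the returned matrix is simultaneously a valid correlation matrix and close to the GP prediction.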
    Free, publicly-accessible full text available April 17, 2026
  4. We present an international comparative analysis of simulated 3D tsunami debris hazards, applying three state-of-the-art numerical methods: the Material Point Method (MPM, ClaymoreUW, multi-GPU), Smoothed Particle Hydrodynamics (SPH, DualSPHysics, GPU), and Eulerian grid-based computational fluid dynamics (Simcenter STAR-CCM+, multi-CPU/GPU). Three teams, two from the United States and one from Germany, apply their unique expertise to shed light on the state of advanced tsunami debris modeling in both open-source and professional software. A mutually accepted and meaningful benchmark is set by 1:40 Froude-scale model experiments of shipping containers mobilized into and amidst a port setting with simplified, generic structures, closely related to the seminal Tohoku 2011 tsunami case histories that severely affected seaports. A sophisticated wave flume at Waseda University in Tokyo, Japan, hosted the experiments, as reported by Goseberg et al. (2016b). Across dozens of trials, an elongated wave generated by a vacuum chamber surges and spills over a generic harbor apron, mobilizing 3–6 hollow model sea containers representing debris, stacked in 1–2 vertical layers, against friction. One to two rows of 5 square obstacles are placed upstream or downstream of the debris, with widths and gaps of 0.66× and 2.2× the debris length, respectively. The work reports and compares results on the long-wave generation from a vacuum-controlled tsunami wave maker, longitudinal displacement of debris forward and back, lateral spreading angle of debris, interactions of stacked debris, and impact forces measured with debris accelerometers and/or obstacle load cells. Each team writes a foreword on its digital twin model, all of which are open-sourced. Preliminary statistical analysis then contrasts simulations produced by the different numerical methods, as well as simulations against experiments. Afterward, each team gives a value proposition for its numerical tool. Finally, a transparent cross-interrogation of results highlights the strengths of each respective method.
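The 1:40 Froude-scale relationship governing these experiments converts model measurements to prototype scale by standard similitude rules: lengths scale by λ, velocities and times by √λ, and forces by λ³ (assuming the same fluid density). The model quantities below are illustrative numbers, not measured values from the study.

```python
# Froude-similitude conversion from a 1:40 scale model to prototype
# scale. Under Froude scaling: length ~ lambda, velocity and time
# ~ sqrt(lambda), force ~ lambda**3 (same fluid density assumed).
# The model measurements below are invented for illustration.
import math

LAMBDA = 40.0  # geometric scale factor, prototype / model

def to_prototype(length_m, velocity_ms, force_n):
    return (length_m * LAMBDA,
            velocity_ms * math.sqrt(LAMBDA),
            force_n * LAMBDA**3)

# A 0.15 m model container struck at 1.0 m/s with a 5 N impact force:
L, v, F = to_prototype(0.15, 1.0, 5.0)
print(L)            # 6.0 m prototype container length
print(round(v, 2))  # 6.32 m/s prototype impact velocity
print(F)            # 320000.0 N prototype impact force
```

The λ³ force scaling is why small model-scale load-cell readings correspond to very large prototype impact forces on port structures.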
    Free, publicly-accessible full text available December 9, 2025
  5. The detailed evaluation of expected losses and damage experienced by structural and nonstructural components is a fundamental part of performance-based seismic design and assessment. The FEMA P-58 methodology represents the state of the art in this area. Increasing interest in improving structural performance and community resilience has led to widespread adoption of this methodology and the library of component models published with it. This study focuses on the modeling of economies of scale in repair cost calculation and specifically highlights the lack of a definition for aggregate damage, a quantity with considerable influence on component repair costs. The article illustrates the highly variable and often substantial impact of damage aggregation, which can alter total repair costs by more than 25%. Four so-called edge cases representing different damage aggregation methods are introduced to investigate which components experience large differences in their repair costs and under what circumstances. A three-step evaluation strategy is proposed that allows engineers to quickly evaluate the potential impact of damage aggregation on a specific performance assessment. This helps users of currently available assessment tools recognize and communicate this uncertainty even when the tools they use support only one particular damage aggregation method. A case study of a 9-story building illustrates the proposed strategy and the impact of this ambiguity on the performance of a realistic structure. The article concludes with concrete recommendations toward the development of a more sophisticated model for repair consequence calculation.
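The interaction between damage aggregation and economies of scale can be made concrete with a small sketch: consequence functions of the FEMA P-58 type assign a unit repair cost that decreases linearly from a maximum (at or below a lower quantity) to a minimum (at or above an upper quantity), so the quantity over which damage is pooled changes the price. The component numbers below are illustrative, not values from the FEMA P-58 library.

```python
# Illustration of how the choice of damage aggregation level interacts
# with economies of scale in repair costs. All quantities and costs are
# invented; FEMA P-58 consequence functions follow this general shape.

def unit_cost(q, q_lo=5.0, q_hi=20.0, c_max=1000.0, c_min=600.0):
    """Unit repair cost with linear economies of scale between q_lo
    (full price) and q_hi (maximum bulk discount)."""
    if q <= q_lo:
        return c_max
    if q >= q_hi:
        return c_min
    t = (q - q_lo) / (q_hi - q_lo)
    return c_max + t * (c_min - c_max)

damaged_per_story = [4.0, 4.0, 4.0]  # damaged units on each of 3 stories

# Per-story aggregation: each story is priced at its own small quantity.
per_story = sum(q * unit_cost(q) for q in damaged_per_story)

# Building-level aggregation: one pooled quantity earns the discount.
total_q = sum(damaged_per_story)
aggregated = total_q * unit_cost(total_q)

print(per_story)   # 12000.0: every story pays the full unit price
print(aggregated)  # smaller: the pooled quantity triggers the discount
```

The two totals differ by nearly 20% for the same physical damage, which is exactly the kind of ambiguity the article's edge cases are designed to expose.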
  6. Existing building recognition methods, exemplified by BRAILS, use supervised learning to extract information from satellite and street-view images for classification and segmentation. However, each task module requires human-annotated data, hindering scalability and robustness to regional variations and annotation imbalances. In response, we propose a new zero-shot workflow for building attribute extraction that uses large-scale vision and language models to mitigate reliance on external annotations. The proposed workflow contains two key components: image-level captioning and segment-level captioning of building images, based on vocabularies pertinent to structural and civil engineering. These components generate descriptive captions by computing feature representations of the image and the vocabularies and performing a semantic match between the visual and textual representations. Consequently, our framework offers a promising avenue for enhancing AI-driven captioning for building attribute extraction in the structural and civil engineering domains, ultimately reducing reliance on human annotations while bolstering performance and adaptability.
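The semantic match described above can be sketched as cosine similarity between an image embedding and embeddings of engineering vocabulary terms, as CLIP-style vision-language models allow. The tiny vectors and vocabulary below are invented stand-ins for real model outputs, purely to show the matching logic.

```python
# Sketch of zero-shot attribute captioning by semantic matching: embed
# the image and each vocabulary term in a shared space, then caption
# with the most similar term. Embeddings and terms are invented here;
# a real system would use a vision-language model's outputs.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_caption(image_emb, vocab):
    """vocab: dict mapping vocabulary term -> text embedding."""
    return max(vocab, key=lambda term: cosine(image_emb, vocab[term]))

vocab = {
    "masonry facade":   np.array([0.9, 0.1, 0.0]),
    "soft-story frame": np.array([0.1, 0.9, 0.2]),
    "curtain wall":     np.array([0.0, 0.2, 0.9]),
}
image_emb = np.array([0.2, 0.8, 0.3])  # hypothetical image embedding
print(zero_shot_caption(image_emb, vocab))  # soft-story frame
```

Because no labeled examples are needed, extending the system to a new attribute amounts to adding a term to the vocabulary, which is the scalability advantage over per-task supervised modules.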