This content will become publicly available on December 1, 2026

Title: Representative feature contributions when predicting the seismic collapse of steel buildings using an ensemble neural network
This study implements an ensemble neural network (ENN) to obtain representative and stable feature-importance contributions for collapse prediction of steel moment resisting frames (SMRFs). The feature-importance assessment includes global sensitivity analyses (GSAs) and feature extraction techniques. To construct the ENN, hundreds of neural network (NN) architectures are generated, and an initial elite set of 50 NNs is obtained using a multi-criteria decision analysis (MCDA). A final elite set of 50 NNs is then produced by applying a genetic algorithm to the initial elite set through several iterations of crossover and mutation. To generate the dataset of SMRF collapse status, thousands of nonlinear time history analyses are carried out on frame systems ranging from 2 to 20 stories. The frames are based on five baseline SMRF systems, with randomly selected input parameters introducing variability.
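The selection-plus-ensemble idea described above can be sketched roughly as follows. Everything here is an illustrative assumption: a synthetic dataset stands in for the SMRF collapse database, a simple accuracy ranking stands in for the MCDA and genetic-algorithm refinement, and the elite set is cut to 3 networks instead of 50.

```python
# Hedged sketch: generate candidate NN architectures, keep an "elite set",
# and average permutation importances across the ensemble so the feature
# ranking is more stable than that of any single network.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=8, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Candidate architectures (one- and two-layer); elite selection by held-out
# accuracy is a stand-in for the paper's multi-criteria decision analysis.
candidates = [(h,) for h in (4, 8, 16, 32)] + [(h, h) for h in (4, 8, 16)]
scored = []
for arch in candidates:
    nn = MLPClassifier(hidden_layer_sizes=arch, max_iter=500,
                       random_state=0).fit(X_tr, y_tr)
    scored.append((nn.score(X_te, y_te), nn))
elite = [nn for _, nn in sorted(scored, key=lambda t: t[0], reverse=True)[:3]]

# Ensemble feature importance: mean permutation importance over the elite set.
imps = np.mean(
    [permutation_importance(nn, X_te, y_te, n_repeats=10,
                            random_state=0).importances_mean
     for nn in elite],
    axis=0,
)
ranking = np.argsort(imps)[::-1]  # most important feature first
```

Averaging over an ensemble damps the run-to-run variability that a single NN's permutation importance typically shows.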
Award ID(s): 2121169
PAR ID: 10656311
Author(s) / Creator(s): ; ;
Publisher / Repository: Elsevier
Date Published:
Journal Name: Structures
Volume: 82
Issue: C
ISSN: 2352-0124
Page Range / eLocation ID: 110736
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. This study applies machine learning (ML) models to predict the collapse limit state of steel moment resisting frame (SMRF) buildings, considering uncertainties in system parameters and input ground motion characteristics. Structural global collapse is affected by a large number of linear and nonlinear system parameters, and one of the main goals of the study is to determine how effective ML methods remain at predicting collapse as the number of system features is reduced. Because sufficient experimental data are lacking, the training data are generated by simulation: three code-compliant SMRF buildings of varying heights (2, 4, and 8 stories) are evaluated up to the collapse limit state using nonlinear time history analyses. Variability in system parameters and ground motions, as well as potential correlation among some of the parameters, is considered to generate a database of more than 19,000 realizations of collapsed and non-collapsed systems. The ML models are trained and tested with this database, and their efficiency is assessed using metrics such as accuracy, F1-score, precision, and recall. Six ML classification techniques are employed to predict collapse, and boosting algorithms (e.g., AdaBoost and XGBoost) are found to be the best methods for collapse status classification of the evaluated structural systems. Permutation feature importance is applied to identify the main contributors to collapse. The ML models are then retrained with fewer features, first removing the nonlinear deterioration parameters and then the nonlinear hardening parameters.
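A minimal sketch of that pipeline, assuming scikit-learn with `GradientBoostingClassifier` as a stand-in for AdaBoost/XGBoost, and a small synthetic dataset in place of the 19,000-realization SMRF database:

```python
# Hedged sketch: train a boosting classifier, score it with the metrics the
# abstract names, then use permutation importance to pick a reduced feature
# set and retrain, mirroring the feature-reduction step described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=12, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
metrics = {
    "accuracy": accuracy_score(y_te, pred),
    "f1": f1_score(y_te, pred),
    "precision": precision_score(y_te, pred),
    "recall": recall_score(y_te, pred),
}

# Permutation importance identifies the main contributors to the predicted
# class; the least important features are dropped and the model retrained.
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
keep = np.argsort(imp.importances_mean)[::-1][:5]  # retain top-5 features
clf_reduced = GradientBoostingClassifier(random_state=0).fit(X_tr[:, keep], y_tr)
```

Comparing `metrics` before and after reduction shows how much predictive power the dropped parameters actually carried.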
  2. Zero-knowledge neural networks are drawing increasing attention for guaranteeing the computation integrity and privacy of neural networks (NNs) through the zero-knowledge Succinct Non-interactive ARgument of Knowledge (zkSNARK) security scheme. However, the performance of zkSNARK NNs is far from optimal due to million-scale circuit computation with heavy scalar-level dependency. In this paper, we propose a type-based optimizing framework for efficient zero-knowledge NN inference, namely ZENO (ZEro knowledge Neural network Optimizer). We first introduce the ZENO language construct to maintain high-level semantics and type information (e.g., privacy and tensor types), allowing more aggressive optimizations. We then propose privacy-type-driven and tensor-type-driven optimizations to further optimize the generated zkSNARK circuit. Finally, we design a set of NN-centric system optimizations to further accelerate zkSNARK NNs. Experimental results show that ZENO achieves up to an 8.5× end-to-end speedup over state-of-the-art zkSNARK NNs, reducing the proof time for VGG16 from 6 minutes to 48 seconds and making zkSNARK NNs practical.
  3. Machine learning (ML) techniques were developed at different information levels to identify the minimum set of system parameters required for predicting the collapse and maximum interstory drift (SDR_max) of steel moment resisting frame (SMRF) buildings. Five baseline modern SMRFs were evaluated under seismic loading with varying system and ground motion (GM) parameters to generate a database. Classification- and regression-based ML models were tested at three system information levels to predict collapse and SDR_max, respectively. The ML predictions were mainly controlled by GM parameters and were relatively insensitive to the system parameters defining nonlinear behavior, such as spring backbone curve features.
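The "information level" comparison can be sketched by retraining a regressor on nested feature subsets and watching the held-out score. The feature groups (GM-only, plus linear system parameters, plus nonlinear parameters), the column split, and the data are all illustrative assumptions, not the paper's actual inputs.

```python
# Hedged sketch: one regression model for a drift-like target, retrained on
# nested feature subsets to see how accuracy degrades as system parameters
# are withheld from the model.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=800, n_features=10, n_informative=6,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Hypothetical nested information levels: columns 0-3 play the role of GM
# parameters, 4-6 linear system parameters, 7-9 nonlinear parameters.
levels = {"GM only": slice(0, 4), "GM + linear": slice(0, 7),
          "full": slice(0, 10)}
scores = {}
for name, cols in levels.items():
    reg = RandomForestRegressor(random_state=0).fit(X_tr[:, cols], y_tr)
    scores[name] = reg.score(X_te[:, cols], y_te)  # held-out R^2
```

If the "GM only" score is close to the "full" score, the withheld system parameters contribute little, which is the kind of conclusion the abstract reports.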
  4. Neural collapse provides an elegant mathematical characterization of learned last-layer representations (a.k.a. features) and classifier weights in deep classification models. Such results not only provide insights but also motivate new techniques for improving practical deep models. However, most existing empirical and theoretical studies of neural collapse focus on the case where the number of classes is small relative to the dimension of the feature space. This paper extends neural collapse to cases where the number of classes is much larger than the dimension of the feature space, which occur broadly in language models, retrieval systems, and face recognition applications. We show that the features and classifier exhibit a generalized neural collapse phenomenon, in which the minimum one-vs-rest margin is maximized. We provide an empirical study verifying the occurrence of generalized neural collapse in practical deep neural networks. Moreover, we provide a theoretical study showing that generalized neural collapse provably occurs under the unconstrained feature model with a spherical constraint, under certain technical conditions on the feature dimension and number of classes.
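The minimum one-vs-rest margin the abstract refers to is easy to state concretely: for logits f(x) and true class y, the margin of a sample is f_y minus the best competing logit, and the quantity of interest is its minimum over the dataset. A small sketch (shapes and example values are assumptions, not from the paper):

```python
# Hedged sketch: compute the minimum one-vs-rest margin over a dataset.
import numpy as np

def min_one_vs_rest_margin(logits: np.ndarray, labels: np.ndarray) -> float:
    """logits: (n, C) classifier outputs; labels: (n,) integer classes."""
    n = logits.shape[0]
    correct = logits[np.arange(n), labels]     # score of the true class
    masked = logits.copy()
    masked[np.arange(n), labels] = -np.inf     # exclude the true class
    rest = masked.max(axis=1)                  # best competing score
    return float((correct - rest).min())       # worst-case margin

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 1.2, 0.3],
                   [-0.5, 0.0, 0.8]])
labels = np.array([0, 1, 2])
m = min_one_vs_rest_margin(logits, labels)  # min(1.5, 0.9, 0.8) = 0.8
```

Generalized neural collapse says that, at the end of training, features and classifier arrange themselves so that this worst-case margin is as large as possible.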
  5. We study the optimization of wide neural networks (NNs) via gradient flow (GF) in setups that allow feature learning while admitting non-asymptotic global convergence guarantees. First, for wide shallow NNs under the mean-field scaling and with a general class of activation functions, we prove that when the input dimension is no less than the size of the training set, the training loss converges to zero at a linear rate under GF. Building upon this analysis, we study a model of wide multi-layer NNs whose second-to-last layer is trained via GF, for which we also prove a linear-rate convergence of the training loss to zero, but regardless of the input dimension. We also show empirically that, unlike in the Neural Tangent Kernel (NTK) regime, our multi-layer model exhibits feature learning and can achieve better generalization performance than its NTK counterpart. 
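A toy numerical illustration of the shallow-NN setting above (not the paper's proof, which concerns the exact gradient-flow dynamics): a wide network under mean-field scaling, f(x) = (1/m) Σ_j a_j ReLU(w_j · x), trained by plain gradient descent as a discretization of gradient flow, with input dimension d at least the number of samples n. Widths, step size, and data are assumptions chosen for a fast-running demo.

```python
# Hedged sketch: mean-field-scaled shallow ReLU network fit by gradient
# descent on a tiny regression problem with d >= n, where the training loss
# is expected to drive toward zero.
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 8, 16, 512                       # d >= n; wide hidden layer
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

W = rng.standard_normal((m, d))            # hidden weights
a = rng.choice([-1.0, 1.0], size=m)        # output weights

def loss_and_grads():
    pre = X @ W.T                          # (n, m) pre-activations
    act = np.maximum(pre, 0.0)             # ReLU
    pred = act @ a / m                     # mean-field 1/m output scaling
    r = pred - y
    loss = 0.5 * np.mean(r ** 2)
    dpred = r / n                          # dLoss/dpred
    dW = ((dpred[:, None] * (pre > 0) * a).T @ X) / m
    da = act.T @ dpred / m
    return loss, dW, da

# Under mean-field scaling, parameter gradients shrink like 1/m, so the step
# size is scaled up with m (heuristic choice for this demo).
lr = 0.05 * m
loss0, _, _ = loss_and_grads()
for _ in range(3000):
    loss, dW, da = loss_and_grads()
    W -= lr * dW
    a -= lr * da
```

Unlike the NTK regime, the hidden weights W move substantially here, which is the feature-learning behavior the abstract contrasts against the NTK counterpart.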