Title: A stochastic programming approach to enhance the resilience of infrastructure under weather‐related risk
ABSTRACT

The presented methodology produces an optimal portfolio of resilience-oriented resource allocation under weather-related risks. Pre-event mitigation improves the capacity of the transportation system to absorb shocks from future natural hazards, contributing to risk reduction. Post-event recovery planning enhances the system's ability to bounce back rapidly, promoting network resilience. Given the complexity of the problem, which stems from the uncertainty of hazards and the impact of pre-event decisions on post-event planning, this study formulates a nonlinear two-stage stochastic programming (NTSSP) model with the objective of minimizing the direct construction investment and indirect costs in both the pre-event mitigation and post-event recovery stages. In the model, the first stage prioritizes a group of bridges to be retrofitted or repaired to improve the system's robustness and redundancy. The second stage models the uncertain occurrence of a natural hazard of any potential intensity at any possible network location, with the damaged state of the network depending on the first-stage mitigation decisions. While prior research has optimized pre-event or post-event efforts separately, few studies address both stages in a single framework, and those that do are limited to small networks with few assets. The NTSSP model addresses this gap and builds a large-scale, data-driven simulation environment. To solve the NTSSP model efficiently, a hybrid heuristic method combining an evolution strategy with high-performance parallel computing is applied, accelerating the evolutionary process and reducing computing time. The NTSSP model is implemented on a test-bed transportation network in Iowa under flood hazards. The results show that the NTSSP model balances economy and efficiency of risk mitigation within the budgeted investment while consistently providing a resilient system throughout the full two-stage horizon.
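The abstract does not include an implementation, but the solution strategy it describes can be sketched. Below is a minimal, hypothetical illustration of a parallelized evolution strategy searching over a first-stage retrofit portfolio, with the second-stage recovery cost approximated by sampled hazard scenarios. All names and numbers (`retrofit_cost`, `fragility`, the assumed retrofit effect, population sizes) are invented placeholders, not the paper's model.

```python
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(0)
N_BRIDGES, BUDGET, N_SCENARIOS = 30, 10.0, 200
retrofit_cost = rng.uniform(0.5, 2.0, N_BRIDGES)  # hypothetical first-stage costs
fragility = rng.uniform(0.2, 0.8, N_BRIDGES)      # hypothetical failure prob. if not retrofitted

def expected_two_stage_cost(portfolio):
    """First-stage cost plus sampled second-stage recovery cost."""
    stage1 = portfolio @ retrofit_cost
    if stage1 > BUDGET:                            # penalize infeasible portfolios
        return stage1 + 1e6
    # Assumption: retrofitting halves a bridge's failure probability.
    p_fail = fragility * np.where(portfolio == 1, 0.5, 1.0)
    damage = rng.random((N_SCENARIOS, N_BRIDGES)) < p_fail
    stage2 = damage.sum(axis=1).mean() * 3.0       # hypothetical repair cost per failure
    return stage1 + stage2

def evolve(pop_size=40, n_gen=50, n_workers=4):
    """(mu, lambda)-style evolution strategy with parallel fitness evaluation."""
    pop = (rng.random((pop_size, N_BRIDGES)) < 0.3).astype(int)
    with Pool(n_workers) as pool:
        for _ in range(n_gen):
            fitness = np.array(pool.map(expected_two_stage_cost, list(pop)))
            parents = pop[np.argsort(fitness)[: pop_size // 4]]   # truncation selection
            children = parents[rng.integers(0, len(parents), pop_size)]
            flips = rng.random(children.shape) < 1.0 / N_BRIDGES  # bit-flip mutation
            pop = np.where(flips, 1 - children, children)
    best = min(pop, key=expected_two_stage_cost)
    return best, expected_two_stage_cost(best)

if __name__ == "__main__":
    portfolio, cost = evolve()
    print("retrofit portfolio:", portfolio, "approx. cost:", cost)
```

The parallel fitness evaluation mirrors the role high-performance parallel computing plays in the paper's hybrid heuristic: scenario sampling dominates the runtime, and it is embarrassingly parallel across candidate portfolios.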

 
Award ID(s):
1751844
NSF-PAR ID:
10395224
Author(s) / Creator(s):
 ;  
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Computer-Aided Civil and Infrastructure Engineering
Volume:
38
Issue:
4
ISSN:
1093-9687
Page Range / eLocation ID:
p. 411-432
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Computing optimal recovery decisions to assure community resilience after a hazard is a combinatorial decision-making problem under uncertainty. It involves solving a large-scale optimization problem that is significantly aggravated by the introduction of uncertainty. In this paper, we draw upon established tools from multiple research communities to provide an effective solution to this challenging problem. We provide a stochastic model of damage to the water network (WN) within a testbed community following a severe earthquake and compute near-optimal recovery actions for restoration of the water network. We formulate this stochastic decision-making problem as a Markov Decision Process (MDP) and solve it using a popular class of heuristic algorithms known as rollout. A simulation-based representation of MDPs is utilized in conjunction with rollout and the Optimal Computing Budget Allocation (OCBA) algorithm to address the resulting stochastic simulation optimization problem. Our method employs non-myopic planning with efficient use of the simulation budget. We show, through simulation results, that rollout fused with OCBA performs competitively with rollout under total equal allocation (TEA) while using only 5-10% of its simulation budget, a crucial step towards addressing large-scale community recovery problems following natural disasters.
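As a rough, hypothetical sketch of the rollout idea described above (not the authors' code), the function below estimates each action's value at the current state by Monte-Carlo simulation of a fixed base policy and then acts greedily. The simulator and base policy are invented stand-ins; the per-action budget is uniform (TEA), where OCBA would allocate it adaptively.

```python
import random

def rollout_action(state, actions, simulate, base_policy,
                   horizon=20, sims_per_action=30):
    """One-step rollout: estimate each action's Q-value by simulating the
    base policy forward, then act greedily. The budget here is uniform
    (TEA); OCBA would instead steer simulations toward the candidates
    whose estimates are closest or noisiest."""
    def q_estimate(action):
        total = 0.0
        for _ in range(sims_per_action):
            s, cost = simulate(state, action)      # one stochastic step
            for _ in range(horizon - 1):           # then follow the base policy
                s, c = simulate(s, base_policy(s))
                cost += c
            total += cost
        return total / sims_per_action
    return min(actions, key=q_estimate)            # minimize expected cost

# Toy stand-in: 'state' counts unrepaired components. Action 1 repairs two
# components but fails half the time; action 0 reliably repairs one.
def simulate(state, action):
    fixed = (2 if random.random() < 0.5 else 0) if action == 1 else 1
    nxt = max(state - fixed, 0)
    return nxt, float(nxt)                         # cost = components still down

print(rollout_action(5, [0, 1], simulate, base_policy=lambda s: 0))
```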
  2. Obeid, Iyad ; Selesnick, Ivan (Ed.)
    Electroencephalography (EEG) is a popular clinical monitoring tool used for diagnosing brain-related disorders such as epilepsy [1]. As monitoring EEGs in a critical-care setting is an expensive and tedious task, there is great interest in developing real-time EEG monitoring tools to improve patient care quality and efficiency [2]. However, clinicians require automatic seizure detection tools that provide decisions with at least 75% sensitivity and less than 1 false alarm (FA) per 24 hours [3]. Some commercial tools have recently claimed to reach such performance levels, including the Olympic Brainz Monitor [4] and Persyst 14 [5].

In this abstract, we describe our efforts to transform a high-performance offline seizure detection system [3] into a low-latency real-time or online seizure detection system. An overview of the system is shown in Figure 1. The main difference between an online and an offline system is that an online system must always be causal and have minimal latency, which is often defined by domain experts. The offline system, shown in Figure 2, uses two phases of deep learning models with postprocessing [3]. The channel-based long short-term memory (LSTM) model (Phase 1 or P1) processes linear frequency cepstral coefficient (LFCC) features [6] from each EEG channel separately. We use the hypotheses generated by the P1 model and create additional features that carry information about the detected events and their confidence. The P2 model uses these additional features and the LFCC features to learn the temporal and spatial aspects of the EEG signals using a hybrid convolutional neural network (CNN) and LSTM model. Finally, Phase 3 aggregates the results from both P1 and P2 before applying a final postprocessing step.

The online system implements Phase 1 by taking advantage of the Linux piping mechanism, multithreading techniques, and multi-core processors. To convert Phase 1 into an online system, we divide it into five major modules: signal preprocessor, feature extractor, event decoder, postprocessor, and visualizer. The system reads 0.1-second frames from each EEG channel and sends them to the feature extractor and the visualizer. The feature extractor generates LFCC features in real time from the streaming EEG signal. Next, the system computes seizure and background probabilities using a channel-based LSTM model and applies a postprocessor to aggregate the detected events across channels. The system then displays the EEG signal and the decisions simultaneously using a visualization module. The online system uses C++, Python, TensorFlow, and PyQtGraph in its implementation.

The online system accepts streamed EEG data sampled at 250 Hz as input. The system begins processing the EEG signal by applying a TCP montage [8]. Depending on the type of montage, the EEG signal can have either 22 or 20 channels. To enable online operation, we send 0.1-second (25-sample) frames from each channel of the streamed EEG signal to the feature extractor and the visualizer. Feature extraction is performed sequentially on each channel. The signal preprocessor writes the sample frames into two streams to feed these modules. In the first stream, the feature extractor receives the signals via stdin. In parallel, as a second stream, the visualizer shares a user-defined file with the signal preprocessor. This user-defined file holds raw signal information as a buffer for the visualizer. The signal preprocessor writes into the file while the visualizer reads from it.
Reading from and writing to the same file poses a challenge: the visualizer can start reading while the signal preprocessor is still writing. To resolve this issue, we utilize a file locking mechanism in the signal preprocessor and visualizer. Each process temporarily locks the file, performs its operation, releases the lock, and tries to reacquire the lock after a waiting period. The file locking mechanism ensures that only one process can access the file at a time by prohibiting other processes from reading or writing while one process is modifying it [9].

The feature extractor uses circular buffers to save 0.3 seconds (75 samples) from each channel for extracting 0.2-second (50-sample) center-aligned windows. The module generates 8 absolute LFCC features in which the zeroth cepstral coefficient is replaced by a temporal-domain energy term. For extracting the remaining features, three pipelines are used. The differential energy feature is calculated in a 0.9-second absolute feature window with a frame size of 0.1 seconds; the difference between the maximum and minimum temporal energy terms is calculated over this range. Then, the first-derivative (delta) features are calculated using another 0.9-second window. Finally, the second-derivative (delta-delta) features are calculated using a 0.3-second window [6]. The differential energy for the delta-delta features is not included. In total, we extract 26 features from the raw sample windows, which adds 1.1 seconds of delay to the system.

We used the Temple University Hospital Seizure Database (TUSZ) v1.2.1 for developing the online system [10]. The statistics for this dataset are shown in Table 1. A channel-based LSTM model was trained using the features derived from the train set using the online feature extractor module. A window-based normalization technique was applied to those features. In the offline model, we scale features by normalizing with the maximum absolute value of a channel [11] before applying a sliding-window approach. Since the online system has access to a limited amount of data, we normalize based on the observed window. The model uses feature vectors with a frame size of 1 second and a window size of 7 seconds. We evaluated the model using the offline P1 postprocessor to determine the efficacy of the delayed features and the window-based normalization technique. As shown by the results of experiments 1 and 4 in Table 2, these changes give performance comparable to the offline model.

The online event decoder module utilizes this trained model to compute probabilities for the seizure and background classes. These posteriors are then postprocessed to remove spurious detections. The online postprocessor receives and saves 8 seconds of class posteriors in a buffer for further processing. It applies multiple heuristic filters (e.g., a probability threshold) to make an overall decision by combining events across the channels. These filters evaluate the average confidence, the duration of a seizure, and the channels where the seizures were observed. The postprocessor delivers the label and confidence to the visualizer. The visualizer starts to display the signal as soon as it gets access to the signal file, as shown in Figure 1 by the “Signal File” and “Visualizer” blocks. Once the visualizer receives the label and confidence for the latest epoch from the postprocessor, it overlays the decision and color-codes that epoch.
The visualizer uses red for seizure with the label SEIZ and green for the background class with the label BCKG. Once the streaming finishes, the system saves three files: a signal file in which the sample frames are saved in the order they were streamed, a time-segmented event (TSE) file with the overall decisions and confidences, and a hypotheses (HYP) file that saves the label and confidence for each epoch. The user can plot the signal and decisions using the signal and HYP files with only the visualizer by enabling the appropriate options.

For comparing the performance of different stages of development, we used the test set of the TUSZ v1.2.1 database. It contains 1015 EEG records of varying duration. The any-overlap performance [12] of the overall system shown in Figure 2 is 40.29% sensitivity with 5.77 FAs per 24 hours. For comparison, the previous state-of-the-art model developed on this database performed at 30.71% sensitivity with 6.77 FAs per 24 hours [3]. The individual performances of the deep learning phases are as follows: Phase 1 (P1) performs at 39.46% sensitivity and 11.62 FAs per 24 hours, and Phase 2 detects seizures with 41.16% sensitivity and 11.69 FAs per 24 hours. We trained an LSTM model with the delayed features and the window-based normalization technique for developing the online system. Using the offline decoder and postprocessor, the model performed at 36.23% sensitivity with 9.52 FAs per 24 hours. The trained model was then evaluated with the online modules. The current performance of the overall online system is 45.80% sensitivity with 28.14 FAs per 24 hours. Table 2 summarizes the performances of these systems. The performance of the online system deviates from the offline P1 model because the online postprocessor fails to combine the events when the seizure probability fluctuates during an event.

The modules in the online system add a total of 11.1 seconds of delay for processing each second of data, as shown in Figure 3. In practice, we also count the time for loading the model and starting the visualizer block. When these are considered, the system takes 15 seconds to display the first hypothesis, and it detects seizure onsets with an average latency of 15 seconds. Implementing an automatic seizure detection model in real time is not trivial. We used a variety of techniques such as the file locking mechanism, multithreading, circular buffers, real-time event decoding, and signal-decision plotting to realize the system. A video demonstrating the system is available at: https://www.isip.piconepress.com/projects/nsf_pfi_tt/resources/videos/realtime_eeg_analysis/v2.5.1/video_2.5.1.mp4. The final conference submission will include a more detailed analysis of the online performance of each module.

ACKNOWLEDGMENTS Research reported in this publication was most recently supported by the National Science Foundation Partnership for Innovation award number IIP-1827565 and the Pennsylvania Commonwealth Universal Research Enhancement Program (PA CURE). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the official views of any of these organizations.

REFERENCES [1] A. Craik, Y. He, and J. L. Contreras-Vidal, “Deep learning for electroencephalogram (EEG) classification tasks: a review,” J. Neural Eng., vol. 16, no. 3, p. 031001, 2019. https://doi.org/10.1088/1741-2552/ab0ab5. [2] A. C. Bridi, T. Q. Louro, and R. C. L. Da Silva, “Clinical Alarms in intensive care: implications of alarm fatigue for the safety of patients,” Rev. Lat. Am. Enfermagem, vol. 22, no. 6, p. 1034, 2014. https://doi.org/10.1590/0104-1169.3488.2513. [3] M. Golmohammadi, V. Shah, I. Obeid, and J. Picone, “Deep Learning Approaches for Automatic Seizure Detection from Scalp Electroencephalograms,” in Signal Processing in Medicine and Biology: Emerging Trends in Research and Applications, 1st ed., I. Obeid, I. Selesnick, and J. Picone, Eds. New York, New York, USA: Springer, 2020, pp. 233–274. https://doi.org/10.1007/978-3-030-36844-9_8. [4] “CFM Olympic Brainz Monitor.” [Online]. Available: https://newborncare.natus.com/products-services/newborn-care-products/newborn-brain-injury/cfm-olympic-brainz-monitor. [Accessed: 17-Jul-2020]. [5] M. L. Scheuer, S. B. Wilson, A. Antony, G. Ghearing, A. Urban, and A. I. Bagic, “Seizure Detection: Interreader Agreement and Detection Algorithm Assessments Using a Large Dataset,” J. Clin. Neurophysiol., 2020. https://doi.org/10.1097/WNP.0000000000000709. [6] A. Harati, M. Golmohammadi, S. Lopez, I. Obeid, and J. Picone, “Improved EEG Event Classification Using Differential Energy,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium, 2015, pp. 1–4. https://doi.org/10.1109/SPMB.2015.7405421. [7] V. Shah, C. Campbell, I. Obeid, and J. Picone, “Improved Spatio-Temporal Modeling in Automated Seizure Detection using Channel-Dependent Posteriors,” Neurocomputing, 2021. [8] W. Tatum, A. Husain, S. Benbadis, and P. Kaplan, Handbook of EEG Interpretation. New York City, New York, USA: Demos Medical Publishing, 2007. [9] D. P. Bovet and M. Cesati, Understanding the Linux Kernel, 3rd ed. O’Reilly Media, Inc., 2005. https://www.oreilly.com/library/view/understanding-the-linux/0596005652/. [10] V. Shah et al., “The Temple University Hospital Seizure Detection Corpus,” Front. Neuroinform., vol. 12, pp. 1–6, 2018. https://doi.org/10.3389/fninf.2018.00083. [11] F. Pedregosa et al., “Scikit-learn: Machine Learning in Python,” J. Mach. Learn. Res., vol. 12, pp. 2825–2830, 2011. https://dl.acm.org/doi/10.5555/1953048.2078195. [12] J. Gotman, D. Flanagan, J. Zhang, and B. Rosenblatt, “Automatic seizure detection in the newborn: Methods and initial evaluation,” Electroencephalogr. Clin. Neurophysiol., vol. 103, no. 3, pp. 356–362, 1997. https://doi.org/10.1016/S0013-4694(97)00003-9.
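As a concrete illustration of the file-locking mechanism described in the item above, here is a minimal POSIX sketch of how the signal preprocessor (writer) and visualizer (reader) could coordinate on the shared buffer file. The file name and frame format are hypothetical stand-ins, not the authors' C++/Python implementation.

```python
import fcntl  # POSIX-only, matching the Linux setting described above

BUFFER_FILE = "signal_buffer.dat"  # hypothetical shared buffer path

def write_frame(frame_bytes):
    """Signal-preprocessor side: append one 0.1-second frame under an
    exclusive lock so the reader never observes a partial write."""
    with open(BUFFER_FILE, "ab") as f:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)  # blocks until acquired
        try:
            f.write(frame_bytes)
            f.flush()
        finally:
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)

def read_buffer():
    """Visualizer side: take a shared lock, read the buffer, release,
    then wait before the next polling attempt."""
    with open(BUFFER_FILE, "rb") as f:
        fcntl.flock(f.fileno(), fcntl.LOCK_SH)
        try:
            return f.read()
        finally:
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)

if __name__ == "__main__":
    write_frame(b"\x00" * 25 * 4)  # e.g., 25 float32 samples = one 0.1 s frame
    print(len(read_buffer()), "bytes in buffer")
```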
  3. Many of the infrastructure sectors that are considered crucial by the Department of Homeland Security include networked systems (physical and temporal) that function to move some commodity, like electricity, people, or even communication, from one location of importance to another. The costs associated with these flows make up the price of the network's normal functionality. These networks have limited capacities, which cause the marginal cost of a unit of flow across an edge to increase as congestion builds. In order to limit the expense of serving a network's normal demand, we aim to increase the resilience of the system, and specifically the resilience of the arc capacities. Divisions of critical infrastructure have faced difficulties in recent years as inadequate resources have been available for needed upgrades and repairs. Unable to foresee the future events that cause damage, both minor and extreme, to these networks, officials must decide how best to allocate limited funds now so that these essential systems can withstand the heavy weight of society's reliance. We model these resource allocation decisions using a two-stage stochastic program (SP) for the purpose of network protection. Starting with a general form for a basic two-stage SP, we enforce assumptions that specify characteristics key to this type of decision model. The second-stage objective, which represents the price of the network's routine functionality, is nonlinear, as it reflects the increasing marginal cost per unit of additional flow across an arc. After the model has been designed properly to reflect the network protection problem, we are left with a nonconvex, nonlinear, nonseparable risk-neutral program. This research focuses on key reformulation techniques that transform the problematic model into one that is convex, separable, and much more solvable. Our approach focuses on using perspective functions to convexify the feasibility set of the second stage and second-order conic constraints to represent nonlinear constraints in a form that better allows the use of computational solvers. Once these methods have been applied to the risk-neutral model, we introduce a risk measure into the first stage that allows us to control the balance between an efficient, solvable model and the need to hedge against extreme events. Using Benders cuts that exploit linear separability, we give a decomposition and solution algorithm for the general network model. The innovations included in this formulation are then implemented on a transportation network with given flow demand.
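To make the convexification idea concrete, the sketch below solves a toy, single-scenario continuous relaxation of a network-protection problem in which the congestion cost x²/y, the perspective of x², is second-order-cone representable and hence handled directly by conic solvers. The instance data are invented, and the paper's full model is stochastic, mixed-integer, and risk-aware; this is only an illustration of the perspective-function reformulation.

```python
import cvxpy as cp
import numpy as np

# Toy instance: three parallel arcs carrying demand D from source to sink.
D, budget = 10.0, 1.0
base_cap = np.array([4.0, 5.0, 6.0])      # pre-protection arc capacities
cap_gain = np.array([3.0, 2.0, 4.0])      # capacity added if an arc is protected
protect_cost = np.array([0.6, 0.4, 0.8])

z = cp.Variable(3)                        # protection level (relaxation of binary)
x = cp.Variable(3, nonneg=True)           # second-stage arc flows
cap = base_cap + cp.multiply(cap_gain, z) # protected capacity, affine in z

# quad_over_lin(x_i, cap_i) = x_i^2 / cap_i is the perspective of x^2:
# jointly convex in (x, cap) and SOC-representable, which is what makes
# the reformulated second stage tractable for conic solvers.
congestion = sum(cp.quad_over_lin(x[i], cap[i]) for i in range(3))

prob = cp.Problem(
    cp.Minimize(protect_cost @ z + congestion),
    [cp.sum(x) == D, z >= 0, z <= 1, protect_cost @ z <= budget],
)
prob.solve()
print("protection:", np.round(z.value, 2), "flows:", np.round(x.value, 2))
```

Congested arcs attract protection spending up to the budget, since added capacity flattens the x²/cap marginal cost, the same economic trade-off the abstract describes.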
  4. Sankey, Temuulen ; Van Den Broeke, Matthew (Ed.)
    Rapid impact assessment of cyclones on coastal ecosystems is critical for timely rescue and rehabilitation operations in highly human-dominated landscapes. Such assessments should also include damage assessments of vegetation for restoration planning in impacted natural landscapes. Our objective is to develop a remote sensing-based approach combining satellite data derived from optical (Sentinel-2), radar (Sentinel-1), and LiDAR (Global Ecosystem Dynamics Investigation) platforms for rapid assessment of post-cyclone inundation in nonforested areas and vegetation damage in a primarily forested ecosystem. We apply this multi-scalar approach to assess the damage caused by the cyclone Amphan that hit coastal India and Bangladesh in May 2020, severely flooding several districts in the two countries and causing destruction to the Sundarban mangrove forests. Our analysis shows that at least 6821 sq. km of land across the 39 study districts remained inundated 10 days after the cyclone. We further calculated the change in forest greenness as the difference in the normalized difference vegetation index (NDVI) pre- and post-cyclone. Our findings indicate a decline of <0.2 NDVI units over 3.45 sq. km of the forest. Rapid assessment of post-cyclone damage in mangroves is challenging due to the limited navigability of waterways, but critical for planning mitigation and recovery measures. We demonstrate the utility of the Otsu method, an automated statistical thresholding approach available on the Google Earth Engine platform, to identify inundated areas within days after a cyclone. Our radar-based inundation analysis advances current practices because it requires minimal user input and is effective in the presence of high cloud cover. Such rapid assessment, when complemented with detailed information on species and vegetation composition, can inform appropriate restoration efforts in severely impacted regions and help decision makers efficiently manage resources for recovery and aid relief. We provide the datasets from this study on an open platform to aid in future research and planning endeavors.
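The workflow above runs on Google Earth Engine, but the core NDVI-differencing and Otsu-thresholding steps can be sketched locally. The rasters below are synthetic stand-ins for Sentinel-2 bands, and the simulated "damage" is purely illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)  # epsilon guards divide-by-zero

# Hypothetical pre- and post-cyclone reflectance rasters (100 x 100 pixels).
rng = np.random.default_rng(1)
nir_pre = rng.uniform(0.3, 0.6, (100, 100))
red_pre = rng.uniform(0.05, 0.15, (100, 100))
nir_post, red_post = nir_pre * 0.8, red_pre * 1.1   # simulated vegetation loss

delta_ndvi = ndvi(nir_post, red_post) - ndvi(nir_pre, red_pre)

# Otsu's method picks the threshold maximizing between-class variance,
# separating 'damaged' from 'undamaged' pixels without user-tuned cutoffs --
# the property that makes it attractive for rapid, minimal-input assessment.
t = threshold_otsu(delta_ndvi)
damaged = delta_ndvi < t
print(f"Otsu threshold {t:.3f}; {damaged.mean():.1%} of pixels flagged as damaged")
```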
  5.
    Nonstructural components within mission-critical facilities such as hospitals and telecommunication facilities are vital to a community's resilience when subjected to a seismic event. Building contents like medical and computer equipment are critical for the response and recovery process following an earthquake. A solution to protecting these systems from seismic hazards is base isolation. Base isolation systems are designed to decouple an entire building structure from destructive ground motions. For other buildings not fitted with base isolation, a practical and economical solution to protect vital building contents from earthquake-induced floor motion is to isolate individual equipment using, for example, rolling-type isolation systems (RISs). RISs are a relatively new innovation for protecting equipment. These systems function as a pendulum-like mechanism to convert horizontal motion into vertical motion. An accompanying change in potential energy creates a restoring force related to the slope of the rolling surface. This study seeks to evaluate the seismic hazard mitigation performance of RISs, as well as propose and test a novel double RIS. A physics-based mathematical model was developed for a single RIS via Lagrange's equation adhering to the kinetic constraint of rolling without slipping. The mathematical model for the single RIS was used to predict the response and characteristics of these systems. A physical model was fabricated with additive manufacturing and tested against multiple earthquakes on a shake table. The system featured a single-degree-of-freedom (SDOF) structure to represent a piece of equipment. The results showed that the RIS effectively reduced accelerations felt by the SDOF compared to a fixed-base SDOF system. The single RIS experienced the most substantial accelerations from the Mendocino record, which contains low-frequency content in the range of the RIS's natural period (1-2 seconds). Earthquakes with these long-period components have the potential to cause impacts within the isolation bearing that would degrade its performance. To accommodate large displacements, a double RIS is proposed. The double RIS has twice the displacement capacity of a single RIS without increasing the size of the bearing components. The mathematical model for the single RIS was extended to the double RIS following a similar procedure. Two approaches were used to evaluate the double RIS's performance: stochastic and deterministic. The stochastic response of the double RIS under stationary white noise excitation was evaluated for relevant system parameters, namely mass ratio and tuning frequency. Both broadband and filtered (Kanai-Tajimi) white noise excitation were considered. The response variances of the double RIS were normalized by a baseline single RIS for a comparative study, from which design parameter maps were drawn. A deterministic analysis was conducted to further evaluate the double RIS in the case of nonstationary excitation. The telecommunication equipment qualification waveform, VERTEQ-II, was used for these numerical simulations. Peak transient responses were compared to the single RIS responses, and optimal design regions were determined. General design guidelines based on the stochastic and deterministic analyses are given. The results aim to provide a framework usable in the preliminary design stage of a double RIS to mitigate seismic responses. 
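The study derives its equations of motion from Lagrange's equation with a rolling-without-slipping constraint. As a hypothetical illustration only, the sketch below uses a simple linearized pendulum analogue of a single RIS (invented parameters, stand-in excitation) to show how an isolation period in the 1-2 second range attenuates the acceleration transmitted to equipment.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linearized single-RIS analogue: the rolling surface acts like a pendulum
# of effective length L_eff, giving an isolation frequency sqrt(g / L_eff).
g, L_eff, zeta = 9.81, 0.6, 0.05          # hypothetical parameters
omega = np.sqrt(g / L_eff)                # isolation period ~1.55 s

def ground_accel(t):
    """Stand-in excitation: a decaying low-frequency pulse."""
    return 3.0 * np.exp(-0.5 * t) * np.sin(2 * np.pi * 0.7 * t)

def rhs(t, y):
    x, v = y                              # displacement/velocity relative to ground
    return [v, -2 * zeta * omega * v - omega**2 * x - ground_accel(t)]

sol = solve_ivp(rhs, (0, 20), [0.0, 0.0], max_step=0.01)
x, v = sol.y
abs_accel = -2 * zeta * omega * v - omega**2 * x   # absolute accel = x'' + a_g
print(f"peak bearing displacement: {np.abs(x).max():.3f} m")
print(f"peak equipment accel: {np.abs(abs_accel).max():.2f} m/s^2 vs "
      f"peak ground accel: {np.abs(ground_accel(sol.t)).max():.2f} m/s^2")
```

The same trade-off appears in the abstract: long-period isolation reduces transmitted acceleration but grows bearing displacement, which is precisely what motivates the proposed double RIS with twice the displacement capacity.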