Given the vital role of data center availability, and because the inlet temperature of IT equipment rises rapidly during a cooling system failure until it reaches a threshold at which the equipment throttles or shuts down due to overheating, it is especially important to understand failures and their effects. This study presents an experimental investigation and analysis of a facility-level cooling system failure scenario in which a chilled water interruption was introduced to the data center. Quantitative instrumentation, including wireless temperature and pressure sensors, was used to measure the discrete air inlet temperature and the pressure differential across the cold aisle containment (CAC), respectively. In addition, Intelligent Platform Management Interface (IPMI) and cooling system data during failure and recovery are reported. Furthermore, the IT equipment performance and response for open and contained environments were simulated and compared. Finally, an experiment-based analysis of the Ride Through Time (RTT) of servers during the chilled water interruption is presented as well. The results showed that for all three classes of servers tested during the cooling failure, the CAC helped keep the servers cooler for longer. The containment provided a barrier between the hot and cold air streams and caused a slight negative pressure …
Measuring Infrastructure and Community Recovery Rate Using Bayesian Methods: A Case Study of Power Systems Resilience
With the increasing frequency and severity of disasters, resulting especially from natural hazards, impacting both infrastructure systems and communities and challenging their timely recovery, there is a strong need to prepare for more effective response and recovery. Communities have especially struggled to understand the recovery patterns of different systems and to prepare accordingly. Therefore, it is essential to develop models that can measure and estimate the recovery trajectory of a given community or infrastructure network from system characteristics and event information. The objective of this study is to deploy the Poisson Bayesian kernel model, developed and tested in earlier risk analysis work, to measure the recovery rate of a system. In this paper, the model is implemented and tested on a resilience modeling case study of power systems. The model is validated through comparison with other count data models, namely the Poisson generalized linear model and the negative binomial generalized linear model.
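As a rough illustration of the two baseline count-data models used for validation, the sketch below fits a Poisson GLM and a negative binomial GLM to simulated recovery counts with statsmodels. The covariates, coefficients, and data are hypothetical placeholders, not the study's power-system dataset, and the Poisson Bayesian kernel model itself is not part of standard libraries and is omitted.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical recovery data: counts of components restored per day.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0, 1, n),   # e.g. normalized event severity (assumed covariate)
    rng.uniform(0, 1, n),   # e.g. crew availability index (assumed covariate)
])
X = sm.add_constant(X)
true_beta = np.array([1.0, -0.8, 0.5])
y = rng.poisson(np.exp(X @ true_beta))

# Poisson GLM baseline
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# Negative binomial GLM baseline (fixed dispersion alpha, for illustration)
nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()

print(poisson_fit.summary())
print("Poisson AIC:", poisson_fit.aic, " NB AIC:", nb_fit.aic)
```

Comparing fit statistics such as AIC across the two baselines mirrors the kind of model comparison described in the abstract.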
- Award ID(s): 1635717
- Publication Date:
- NSF-PAR ID: 10072967
- Journal Name: Annual European Safety and Reliability Conference
- Sponsoring Org: National Science Foundation
More Like this
- We develop a projected Nesterov's proximal-gradient (PNPG) approach for sparse signal reconstruction that combines an adaptive step size with Nesterov's momentum acceleration. The objective function we wish to minimize is the sum of a convex differentiable data-fidelity term (the negative log-likelihood, NLL) and a convex regularization term. We apply sparse signal regularization where the signal belongs to a closed convex set within the closure of the domain of the NLL; the convex-set constraint facilitates flexible NLL domains and accurate signal recovery. Signal sparsity is imposed using the ℓ₁-norm penalty on the signal's linear transform coefficients. The PNPG approach employs a projected Nesterov's acceleration step with restart and a duality-based inner iteration to compute the proximal mapping. We propose an adaptive step-size selection scheme to obtain a good local majorizing function of the NLL and reduce the time spent backtracking. Thanks to step-size adaptation, PNPG converges faster than methods that do not adjust to the local curvature of the NLL. We present an integrated derivation of the momentum acceleration and proofs of the O(k⁻²) objective function convergence rate and convergence of the iterates, which account for the adaptive step size, inexactness of the iterative proximal mapping, and the convex-set constraint. The tuning of …
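For intuition, here is a minimal FISTA-style sketch of a Nesterov-accelerated proximal-gradient iteration with an ℓ₁ proximal operator (soft thresholding), using a quadratic data-fidelity term as the NLL. The adaptive step size, restart, convex-set projection, and inexact duality-based inner iterations that distinguish PNPG are omitted; the function names and the synthetic problem are illustrative, not the paper's implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||v||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista_lasso(A, b, lam, step, n_iter=200):
    """Nesterov-accelerated proximal gradient for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1 (Gaussian NLL case).
    'step' should be <= 1 / ||A||_2^2 for convergence."""
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                              # gradient of data-fidelity term
        x_new = soft_threshold(y - step * grad, step * lam)   # proximal (shrinkage) step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))      # momentum schedule
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)         # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# Usage on a small synthetic sparse-recovery problem
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
step = 1.0 / np.linalg.norm(A, 2) ** 2
x_hat = fista_lasso(A, b, lam=0.1, step=step)
```

PNPG's contribution, per the abstract, lies in adapting the fixed `step` above to the local curvature of the NLL and in handling the convex-set constraint, which this fixed-step sketch does not attempt.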
- Quantifying the resilience of ecological communities to increasingly frequent and severe environmental disturbance, such as natural disasters, requires long-term and continuous observations and a research community that is itself resilient. Investigators must have reliable access to data, a variety of resources to facilitate response to perturbation, and mechanisms for rapid and efficient return to function and/or adaptation to post-disaster conditions. There are always challenges to meeting these requirements, which may be compounded by multiple, co-occurring incidents. For example, travel restrictions resulting from the COVID-19 pandemic hindered preparations for, and responses to, environmental disasters that are the hallmarks of resilient research communities. During its initial years of data collection, a diversity of disturbances (earthquakes, wildfires, droughts, hurricanes, and floods) has impacted sites at which the National Ecological Observatory Network (NEON) intends to measure organisms and environment for at least 30 years. These events strain both the natural and human communities associated with the Observatory, and additional stressors like public health crises only add to the burden. Here, we provide a case study of how NEON has demonstrated not only internal resilience in the face of the COVID-19 public health crisis, but has also enhanced the resilience of ecological research communities associated with the network …
- Flooding caused by tropical cyclones, tsunamis, and many other phenomena is one type of natural disaster that occurs all around the world. While these disasters cannot be prevented, communities can be made more resilient, and the damage they cause to lives and infrastructure can be minimized, by developing early warning systems. Microwave-based systems provide a non-contact measurement setup for monitoring water level, and thus require low maintenance and operation costs. In this paper, a DC-coupled 5.8-GHz interferometry radar was designed and tested by observing the water level in a barrel as water was poured in and drained out over a long time period. By adding more gain to the RF chain and removing the gain in the baseband, the proposed water-level monitoring radar system eliminates the complex DC-tuning structures required in previous works. The experiment demonstrated that the proposed system was able to accurately measure the relative position of the water with mm-accuracy.
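For a CW interferometric radar, relative displacement follows from the unwrapped baseband phase via d = λΔφ/(4π), since one wavelength of target motion produces a 4π round-trip phase change. Below is a minimal sketch of that conversion at the paper's 5.8-GHz carrier; the I/Q samples are synthetic placeholders rather than measured data, and the function name is hypothetical.

```python
import numpy as np

C = 3e8                      # speed of light, m/s
F_CARRIER = 5.8e9            # radar carrier frequency, Hz (from the paper)
WAVELENGTH = C / F_CARRIER   # ~51.7 mm

def displacement_from_iq(i, q):
    """Relative target displacement (m) from baseband I/Q samples.
    A round-trip phase change of 4*pi corresponds to one wavelength."""
    phase = np.unwrap(np.arctan2(q, i))           # unwrapped beat phase, rad
    return WAVELENGTH * phase / (4.0 * np.pi)     # displacement, m

# Synthetic example: water level rising 5 mm over the record
true_d = np.linspace(0.0, 0.005, 1000)
phi = 4.0 * np.pi * true_d / WAVELENGTH
i_sig, q_sig = np.cos(phi), np.sin(phi)
est_d = displacement_from_iq(i_sig, q_sig)
print(f"max error: {np.max(np.abs(est_d - true_d)) * 1e3:.3f} mm")
```

At ~51.7 mm wavelength, millimeter-level displacements correspond to easily resolvable phase shifts of tens of degrees, which is consistent with the mm-accuracy reported in the abstract.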
- Since its commissioning in 2004, the UC San Diego Large High-Performance Outdoor Shake Table (LHPOST) has enabled the seismic testing of large structural, geostructural, and soil-foundation-structural systems, with its ability to accurately reproduce far- and near-field ground motions. Thirty-four (34) landmark projects were conducted on the LHPOST, a national shared-use equipment facility of the National Science Foundation (NSF) Network for Earthquake Engineering Simulation (NEES) and, currently, Natural Hazards Engineering Research Infrastructure (NHERI) programs, and an ISO/IEC Standard 17025:2005 accredited facility. The tallest structures ever tested on a shake table were tested on the LHPOST, which is free from height restrictions. Experiments using the LHPOST generate essential knowledge that has greatly advanced seismic design practice and response-prediction capabilities for structural, geostructural, and non-structural systems, leading to improved earthquake safety in the community overall. Indeed, the ability to test full-size structures has made it possible to physically validate the seismic performance of various systems that previously could only be studied at reduced scale or with computer models. However, the LHPOST's limitation to 1-DOF (uni-directional) input motion prevented the investigation of important aspects of the seismic response of 3-D structural systems. The LHPOST was originally conceived as a six degrees-of-freedom (6-DOF) shake …