Maintaining the functionality of wastewater networks is critical to individual well-being, business continuity, public health, and safety. However, seismic damage and loss assessments of wastewater networks traditionally use fragility functions based on median repair rates without considering relevant sources of uncertainty and correlations of damage when estimating potential damage states and pipe repairs. This study presents a probabilistic methodology to incorporate modeling uncertainty (e.g. model parameter and model class uncertainty) and spatial correlations (e.g. spatial auto- and cross-correlation) of pipe repairs. The methodology was applied to a case study backbone system of a wastewater network in Portland, OR, using the expected hazard intensity maps for multiple deterministic earthquake scenarios, including a moment magnitude M6.8 Portland Hills Fault and M8.1, M8.4, M8.7, and M9.0 Cascadia Subduction Zone (CSZ) events. As spatial-correlation models of pipeline damage were non-existent in the literature and local information on costs to repair the pipes was limited at the time of this study, correlation methods and repair costs were proposed to estimate lower and upper bounds of pipe damage and loss. The results show how the consideration of different levels of uncertainty and spatial correlation for pipe repair rate could lead to different probabilistic estimates of damage and loss at the system level of the wastewater network, even though the point estimates, such as the mean and median, remain essentially unaltered.
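The repair-rate framing above can be illustrated with a minimal Monte Carlo sketch. This is not the paper's methodology: it only shows the common convention of modeling pipe repairs as a Poisson count with mean equal to repair rate times pipe length, with modeling uncertainty expressed as a lognormal factor on the median repair rate. The function name, dispersion value, and example numbers are all illustrative assumptions.

```python
import math
import random

def sample_pipe_repairs(repair_rate_km, length_km, sigma_ln=0.5, n_sims=10000, seed=0):
    """Monte Carlo mean repair count for one pipe segment.

    repair_rate_km : median repairs per km (e.g. from a fragility function)
    sigma_ln       : lognormal dispersion on the repair rate (modeling uncertainty)
    """
    rng = random.Random(seed)
    counts = []
    for _ in range(n_sims):
        # Sample an uncertain repair rate around the median (lognormal factor).
        rr = repair_rate_km * math.exp(rng.gauss(0.0, sigma_ln))
        # Given the rate, repairs along the pipe are commonly modeled as Poisson.
        lam = rr * length_km
        # Poisson draw by CDF inversion (adequate for small lambda).
        u, k, p = rng.random(), 0, math.exp(-lam)
        cdf = p
        while u > cdf:
            k += 1
            p *= lam / k
            cdf += p
        counts.append(k)
    return sum(counts) / n_sims

# Mean repairs for a 2 km pipe with a median rate of 0.3 repairs/km.
mean_repairs = sample_pipe_repairs(0.3, 2.0)
```

Note that the lognormal uncertainty inflates the mean above the median-rate value of 0.6, which is exactly the kind of effect the abstract says point estimates alone would miss.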
Co-Optimization of Damage Assessment and Restoration: A Resilience-Driven Dynamic Crew Allocation for Power Distribution Systems
This study introduces a mixed-integer linear programming (MILP) model, effectively co-optimizing patrolling, damage assessment, fault isolation, repair, and load re-energization processes. The model is designed to solve a vital operational conundrum: deciding between further network exploration to obtain more comprehensive data or addressing the repair of already identified faults. As information on the fault location and repair timelines becomes available, the model allows for dynamic adaptation of crew dispatch decisions. In addition, this study proposes a conservative power flow constraint set that considers two network loading scenarios within the final network configuration. This approach results in the determination of an upper and a lower bound for node voltage levels and an upper bound for power line flows. To underscore the practicality and scalability of the proposed model, we have demonstrated its application using IEEE 123-node and 8500-node test systems, where it delivered promising results.
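The explore-versus-repair trade-off that the MILP resolves can be sketched with a toy brute-force search over a single crew's schedule. This is emphatically not the paper's model: the fault data, the load-weighted objective, and the exhaustive enumeration are all invented stand-ins for the MILP's decision variables and resilience objective.

```python
from itertools import permutations

# Hypothetical faults: (assessment_time, repair_time, load_kW, already_assessed)
FAULTS = {
    "F1": (1.0, 3.0, 200.0, True),   # known fault, repair only
    "F2": (2.0, 1.0, 500.0, False),  # needs patrol/assessment first
    "F3": (1.0, 2.0, 100.0, False),
}

def served_energy(order, horizon=10.0):
    """Load-weighted energized time if one crew handles faults in `order`."""
    t, total = 0.0, 0.0
    for name in order:
        assess, repair, load, assessed = FAULTS[name]
        t += repair if assessed else assess + repair
        if t <= horizon:
            total += load * (horizon - t)  # load re-energized at time t
    return total

best = max(permutations(FAULTS), key=lambda o: served_energy(o))
```

With these numbers, patrolling first to uncover the high-load fault F2 beats immediately repairing the already-known fault, which is the kind of conundrum the MILP decides optimally at scale.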
- Award ID(s): 2145564
- PAR ID: 10649360
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: IEEE Transactions on Power Systems
- Volume: 40
- Issue: 1
- ISSN: 0885-8950
- Page Range / eLocation ID: 676 to 688
- Subject(s) / Keyword(s): Damage assessment, fault management, field crew, resilience, and service restoration
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
In this paper, we study fault-tolerant distributed consensus in wireless systems. In more detail, we produce two new randomized algorithms that solve this problem in the abstract MAC layer model, which captures the basic interface and communication guarantees provided by most wireless MAC layers. Our algorithms work for any number of failures, require no advance knowledge of the network participants or network size, and guarantee termination with high probability after a number of broadcasts that are polynomial in the network size. Our first algorithm satisfies the standard agreement property, while our second trades a faster termination guarantee in exchange for a looser agreement property in which most nodes agree on the same value. These are the first known fault-tolerant consensus algorithms for this model. In addition to our main upper bound results, we explore the gap between the abstract MAC layer and the standard asynchronous message passing model by proving fault-tolerant consensus is impossible in the latter in the absence of information regarding the network participants, even if we assume no faults, allow randomized solutions, and provide the algorithm a constant-factor approximation of the network size.
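One intuition behind the broadcast-model setting can be shown with a deliberately simplified simulation. This toy is not the paper's algorithm (which must handle crashes, contention, and unknown participation): it only illustrates why a single-hop broadcast medium is favorable for consensus, since every node that hears a successful broadcast hears the same value.

```python
import random

def toy_broadcast_consensus(values, seed=0):
    """Toy single-hop broadcast consensus: repeatedly pick a random
    speaker (standing in for randomized contention resolution); its
    reliable broadcast is heard and adopted by every node. Returns the
    decided value and the number of broadcasts used."""
    rng = random.Random(seed)
    vals = list(values)
    broadcasts = 0
    while len(set(vals)) > 1:
        speaker = rng.randrange(len(vals))  # contention resolved at random
        msg = vals[speaker]                 # one reliable local broadcast
        vals = [msg] * len(vals)            # every listener adopts the value
        broadcasts += 1
    return vals[0], broadcasts

decision, used = toy_broadcast_consensus([0, 1, 1, 0, 1])
```

In this idealized setting one broadcast already suffices; the difficulty the paper addresses is preserving such guarantees under failures and without knowing who is participating.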
-
Automated debugging techniques, including fault localization and program repair, have been studied for over a decade. However, the only existing connection between fault localization and program repair is that fault localization computes the potential buggy elements for program repair to patch. Recently, a pioneering work, ProFL, explored the idea of unified debugging to unify fault localization and program repair in the other direction for the first time to boost both areas. More specifically, ProFL utilizes the patch execution results from one state-of-the-art repair system, PraPR, to help improve state-of-the-art fault localization. In this way, ProFL not only improves fault localization for manual repair, but also extends the application scope of automated repair to all possible bugs (not only the small ratio of bugs that can be automatically fixed). However, ProFL only considers one APR system (i.e., PraPR), and it is not clear how other existing APR systems based on different designs contribute to unified debugging. In this work, we perform an extensive study of the unified-debugging approach on 16 state-of-the-art program repair systems for the first time. Our experimental results on the widely studied Defects4J benchmark suite reveal various practical guidelines for unified debugging, such as (1) nearly all the studied 16 repair systems can positively contribute to unified debugging despite their varying repairing capabilities, (2) repair systems targeting multi-edit patches can bring extraneous noise into unified debugging, (3) repair systems with more executed/plausible patches tend to perform better for unified debugging, and (4) unified debugging effectiveness does not rely on the availability of correct patches in automated repair.
Based on our results, we further propose an advanced unified debugging technique, UniDebug++, which can localize over 20% more bugs within Top-1 positions than the state-of-the-art unified debugging technique, ProFL.
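The core unified-debugging signal can be sketched in a few lines. This is a simplified illustration of the idea (patch execution results feeding fault localization), not the actual ProFL or UniDebug++ scoring; the element names and outcome numbers are hypothetical.

```python
def rank_elements(patch_results):
    """Rank code elements by the best patch outcome observed for them.

    patch_results: element -> list of outcomes, where each outcome is the
    number of originally failing tests that pass under one patch at that
    element. If some patch turns failing tests green, the element is
    likely near the fault (a simplified unified-debugging signal).
    """
    scores = {e: max(outcomes, default=0) for e, outcomes in patch_results.items()}
    return sorted(scores, key=lambda e: (-scores[e], e))

ranking = rank_elements({
    "Foo.java:42": [0, 3],   # one patch fixed 3 failing tests
    "Bar.java:10": [1],
    "Baz.java:7":  [0, 0],
})
```

The element whose patches repair the most failing tests is ranked first, which mirrors how patch execution results boost localization even when no patch is fully correct.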
-
We propose a structure-preserving model-reduction methodology for large-scale dynamic networks with tightly-connected components. First, the coherent groups are identified by a spectral clustering algorithm on the graph Laplacian matrix that models the network feedback. Then, a reduced network is built, where each node represents the aggregate dynamics of each coherent group, and the reduced network captures the dynamic coupling between the groups. We provide an upper bound on the approximation error when the network graph is randomly generated from a weighted stochastic block model. Finally, numerical experiments align with and validate our theoretical findings.
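The two-step pipeline (spectral grouping, then aggregation) can be sketched on a toy graph. This is only an illustration under assumptions, not the paper's algorithm: the six-node graph, the sign-of-the-Fiedler-vector clustering, and the indicator-based projection are a minimal stand-in for the general method.

```python
import numpy as np

# Two tightly connected triangles (nodes 0-2 and 3-5) joined by a weak edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
A[2, 3] = A[3, 2] = 0.05         # weak coupling between the groups
L = np.diag(A.sum(1)) - A        # graph Laplacian

# Step 1: the sign pattern of the Fiedler vector (eigenvector of the
# second-smallest eigenvalue) recovers the two coherent groups.
w, V = np.linalg.eigh(L)
groups = (V[:, 1] > 0).astype(int)

# Step 2: aggregate each group into one node. With P the normalized group
# indicator matrix, L_r = P^T L P is a reduced Laplacian that preserves the
# weak inter-group coupling (a structure-preserving projection).
P = np.zeros((6, 2))
for node, g in enumerate(groups):
    P[node, g] = 1.0
P /= np.sqrt(P.sum(0))
L_r = P.T @ L @ P
```

The reduced 2x2 matrix is itself a (scaled) Laplacian whose single off-diagonal entry reflects the weak edge between the groups, which is the coupling the reduced network is meant to capture.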
-
We study smoothed analysis of distributed graph algorithms, focusing on the fundamental minimum spanning tree (MST) problem. With the goal of studying the time complexity of distributed MST as a function of the "perturbation" of the input graph, we posit a smoothing model that is parameterized by a smoothing parameter 0 ≤ ϵ(n) ≤ 1 which controls the amount of random edges that can be added to an input graph G per round. Informally, ϵ(n) is the probability (typically a small function of n, e.g., n^(−1/4)) that a random edge can be added to a node per round. The added random edges, once they are added, can be used (only) for communication. We show upper and lower bounds on the time complexity of distributed MST in the above smoothing model. We present a distributed algorithm that, with high probability, computes an MST and runs in Õ(min{(1/√ϵ(n)) · 2^O(√log n), D + √n}) rounds, where ϵ is the smoothing parameter, D is the network diameter and n is the network size. To complement our upper bound, we also show a lower bound of Ω(min{1/√ϵ(n), D + √n}). We note that the upper and lower bounds essentially match except for a multiplicative 2^O(√log n) polylog(n) factor. Our work can be considered as a first step in understanding the smoothed complexity of distributed graph algorithms.
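The smoothing model itself (not the distributed MST algorithm) is easy to simulate centrally. The sketch below, with invented parameters, shows why per-round random edges help: even a worst-case input like a path quickly gains chords that shrink the diameter of the communication graph, which is what the 1/√ϵ(n) term in the bounds exploits.

```python
import random
from collections import deque

def diameter(n, edges):
    """Exact diameter via BFS from every node (fine for toy sizes)."""
    adj = {i: [] for i in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    best = 0
    for s in range(n):
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        best = max(best, max(dist.values()))
    return best

def smoothed_edges(n, eps, rounds, seed=0):
    """Smoothing model sketch: each round, every node gains a uniformly
    random extra edge with probability eps. Added edges are tracked
    separately because they may be used only for communication."""
    rng = random.Random(seed)
    path = {(i, i + 1) for i in range(n - 1)}   # adversarial input: a path
    added = set()
    for _ in range(rounds):
        for u in range(n):
            if rng.random() < eps:
                v = rng.randrange(n)
                if v != u:
                    added.add((min(u, v), max(u, v)))
    return path, added

n = 40
path, added = smoothed_edges(n, eps=0.25, rounds=5)
d_before = diameter(n, path)           # n - 1 for a path
d_after = diameter(n, path | added)    # random chords shrink it
```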