<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>The sensitivity of electric power infrastructure resilience to the spatial distribution of disaster impacts</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>January 2020</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10129152</idno>
					<idno type="doi">10.1016/j.ress.2019.106658</idno>
					<title level='j'>Reliability Engineering &amp; System Safety</title>
					<idno type="ISSN">0951-8320</idno>
<biblScope unit="volume">193</biblScope>
<biblScope unit="issue"></biblScope>					

					<author>Benjamin Rachunok</author><author>Roshanak Nateghi</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[Credibly assessing the resilience of energy infrastructure in the face of natural disasters is a salient concern facing researchers, government officials, and community members. Here, we explore the influence of the spatial distribution of disruptions due to hurricanes and other natural hazards on the resilience of power distribution systems. We find that incorporating information about the spatial distribution of disaster impacts has significant implications for estimating infrastructure resilience. Specifically, the uncertainty associated with estimated infrastructure resilience metrics to spatially distributed disaster-induced disruptions is much higher than determined by previous methods. We present a case study of an electric power distribution grid impacted by a major landfalling hurricane. We show that improved characterizations of disaster disruption drastically change the way in which the grid recovers, including changes in emergent system properties such as antifragility. Our work demonstrates that previous methods for estimating critical infrastructure resilience may be overstating the confidence associated with estimated network recoveries due to the lack of consideration of the spatial structure of disruptions.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Defined broadly, resilience is an emergent property of a system which manifests as the result of an iterative process of sensing, anticipation, learning, and adaptation to all types of disruptions <ref type="bibr">[1]</ref>. Using this definition, resilience must be studied at a system-wide level, where the resilience of an entire system is studied in the context of hazards and disruptions. Characterization of the resilience of a complex system, therefore, is inherently a comprehensive analysis of that which acts against it. This system-disruption paradigm allows for the study of a wide range of interaction-based entities, from ecological plant-pollinator relationships <ref type="bibr">[2,</ref><ref type="bibr">3]</ref> to the psychological resilience of families to trauma <ref type="bibr">[4]</ref>.</p><p>In the context of engineering urban systems, the resilience of a critical infrastructure (e.g., the electric power grid, telecommunication networks, natural gas, the water network, etc.) includes the study of recovery from failures induced by hydro-climatic extremes and seismic events as well as acts of terrorism. Critical urban networked infrastructure is well represented by a graph <ref type="bibr">[5]</ref>. Consequently, disrupting a graph requires removing or disabling fractions of the system consistent with an exogenous threat or hazard.</p><p>In this paper, we use a graph-theoretic approach to show that small changes in the spatial characteristics of a disruption to a system radically change the characteristics of system performance as the disruption is repaired over time. 
Whether recovery is measured in terms of network-based performance metrics or by the extent of impact on stakeholders, our results indicate that the measured resilience of a system is heavily dependent on the spatial characteristics of the initial disruption.</p><p>We conduct this study for the case of an electric power distribution grid impacted by a major landfalling hurricane. We generate different spatial distributions of initial disruptions to a power grid and study their impact on graph-theoretic measures of network connectivity as well as on the number of customers without power. The remainder of this paper is organized as follows: Section 2 reviews related work, Section 3 outlines the data and methods used for this analysis, and Sections 4 and 5 present the results and conclusions, respectively.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background</head><p>Network analysis deals with the study of graphs or networks. Networks are "a collection of points [referred to as vertices or nodes] joined together by pairs of lines [referred to as edges or links]." <ref type="bibr">[5]</ref> The edge-vertex pairing makes networks an intuitive mathematical object with which to model phenomena such as animal and plant interactions <ref type="bibr">[6]</ref>, academic authorship, urban infrastructure design <ref type="bibr">[7]</ref> <ref type="bibr">[8]</ref>, and-most relevant to this work-electric power infrastructure <ref type="bibr">[9]</ref><ref type="bibr">[10]</ref><ref type="bibr">[11]</ref><ref type="bibr">[12]</ref>. Representing a system as a network allows for simple-and in most cases tractable-estimations of system performance. Measurements of the overall size, degree of connectivity, length of paths between vertices, and degree of clustering are all easily computed from a network model and can provide a myriad of insights about the system being represented <ref type="bibr">[13]</ref>. Graphs representing a system in which the components interact can be used to model how the failure of one vertex may propagate through the network <ref type="bibr">[14]</ref>. If failure likelihoods are drawn from certain probability distributions, there can exist critical fractions of node failures for which the failure will cascade to the entire network. This holds even when multiple networks are coupled together <ref type="bibr">[15]</ref>.</p><p>Network-based approaches have been widely used to model the resilience of infrastructure <ref type="bibr">[7,</ref><ref type="bibr">16,</ref><ref type="bibr">17]</ref>. 
This is in addition to conceptual frameworks <ref type="bibr">[1,</ref><ref type="bibr">18,</ref><ref type="bibr">19]</ref>, highly detailed hazard simulations <ref type="bibr">[20]</ref><ref type="bibr">[21]</ref><ref type="bibr">[22]</ref><ref type="bibr">[23]</ref>, and statistical and machine learning approaches <ref type="bibr">[24]</ref><ref type="bibr">[25]</ref><ref type="bibr">[26]</ref><ref type="bibr">[27]</ref><ref type="bibr">[28]</ref>.<ref type="foot">1</ref> All of this work contributes greatly toward improving the resilience of infrastructure by advancing theoretical understandings in network science <ref type="bibr">[17]</ref>, addressing particular infrastructure inefficiencies <ref type="bibr">[30]</ref>, and improving policy decisions <ref type="bibr">[31]</ref>.</p><p>Generalized graph-theoretic resilience analyses commonly model disruptions by assigning a probability of failure to each vertex in the graph <ref type="bibr">[14,</ref><ref type="bibr">15,</ref><ref type="bibr">17]</ref>. The random pattern of outages fits within a probabilistic formalism allowing for a theoretical understanding of network properties, but provides little realism in the spatial pattern of disruptions. Many infrastructure-system analyses continue to use random vertex failures as the general form of the disruption <ref type="bibr">[11,</ref><ref type="bibr">32,</ref><ref type="bibr">33]</ref>. Degree targeting is another commonly used technique, in which failures are initiated at vertices with the highest degree <ref type="bibr">[10,</ref><ref type="bibr">12,</ref><ref type="bibr">14,</ref><ref type="bibr">34,</ref><ref type="bibr">35]</ref>. This method is representative of a targeted attack in which an agent wishes to remove nodes which connect to a large portion of the network; however, there is no restriction on the spatial distribution of the failures. 
Similarly, other vertex properties have been used to motivate targeting, such as betweenness <ref type="bibr">[10]</ref> or maximum flow <ref type="bibr">[35]</ref>. Localized failures-in which failures are initialized in small connected components-have been previously studied, albeit with limited scope, focusing primarily on repair strategies <ref type="bibr">[14]</ref> or on replicating previous incidents <ref type="bibr">[33]</ref>.</p><p>It should be noted that many previous studies consider disruptions to infrastructure which are-in some way-spatially organized, either through explicit specification <ref type="bibr">[36]</ref>, fragility curves <ref type="bibr">[37]</ref>, or reliance on historical data <ref type="bibr">[38]</ref>. However, to our knowledge, the inclusion of spatially structured and non-spatially structured disruptions is secondary to the development of an optimization <ref type="bibr">[39]</ref><ref type="bibr">[40]</ref><ref type="bibr">[41]</ref> or recovery model, or a resilience measurement algorithm <ref type="bibr">[38,</ref><ref type="bibr">42]</ref>. This work is the first to focus on the explicit impact of the spatial distribution of outages, which we do using general, network-based modeling paradigms.</p><p>In this work, we isolate the importance of accounting for the spatial distribution of a disruption and show that inducing changes in only the spatial distribution significantly impacts measurements of system performance. Specifically, the goal of the analysis is not so much to propose a particular spatial pattern of disruption over another, but to demonstrate the importance of considering the shape of disruptions in estimating infrastructure recovery. We present the results in a case study of an electric power distribution grid's response to a hurricane. 
The electric power distribution system has been identified as a critical component of assessing the vulnerability of the electric power grid to severe-weather disruptions such as hurricanes, with approximately 90% of outages occurring at the distribution level <ref type="bibr">[43]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methods</head><p>As previously mentioned, to investigate the sensitivity of infrastructure system performance to the spatial distribution of disruptions, we present the case of an electric power distribution system's recovery after a major landfalling hurricane. Specifically, we focus on the impact of the spatial distribution of hurricane-induced disruptions on the performance of an electric power grid located on the Gulf Coast of the U.S. (Fig. <ref type="figure">1</ref>). <ref type="foot">2</ref> We do this by simulating large-scale disruptions in the distribution grid, mapping the hurricane-induced disruptions to component failures (outages) in a distribution-level power grid, and studying the sensitivity of the resilience of the system to the spatial distribution of the disruption. The simulated outages are subsequently repaired over time, replicating the actual recovery of the power grid from the hurricane disruption, so as to study the dynamics of the system's recovery.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Electric power network</head><p>The city for which this analysis is being performed provided GIS files including the location of all of the county's power substations. These are used to locate the position of the nodes in the test network. There are 221 substations and 2 power plants in these data. As we were unable to retrieve information on the connections between the substations, nodes are connected using a minimum spanning tree to establish the edges of the graph. A minimum spanning tree represents a radial network, common among electric power distribution systems <ref type="bibr">[35,</ref><ref type="bibr">44]</ref>. The resulting graph has 223 vertices and 222 edges.</p></div>
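The edge-inference step above can be sketched as follows. This is a minimal Python illustration of building a minimum spanning tree over Euclidean distances between substation coordinates using Prim's algorithm; the coordinates and function names are illustrative, not taken from the authors' implementation (which was written in R).

```python
import math

def minimum_spanning_tree(coords):
    """Prim's algorithm over Euclidean distances between substation
    coordinates. Returns the MST as a list of (i, j) vertex-index edges.
    coords: list of (x, y) tuples, one per substation or plant."""
    n = len(coords)
    in_tree = {0}  # start from an arbitrary vertex
    edges = []
    # best[j] = (distance from j to the tree, tree vertex it would attach to)
    best = {j: (math.dist(coords[0], coords[j]), 0) for j in range(1, n)}
    while len(in_tree) < n:
        j = min(best, key=lambda v: best[v][0])  # closest outside vertex
        d, i = best.pop(j)
        in_tree.add(j)
        edges.append((i, j))
        for v in best:  # relax distances to the remaining vertices
            d_new = math.dist(coords[j], coords[v])
            if d_new < best[v][0]:
                best[v] = (d_new, j)
    return edges
```

A tree over n vertices always has n - 1 edges, which matches the radial structure reported for the study network (223 vertices, 222 edges).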
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Disruption generation algorithms</head><p>In this section, we describe the different disruption patterns evaluated in this study. All cases described cause failures in 60% of the vertices, and this failure proportion is kept constant across all trials. This is in accordance with the actual impact of Hurricane Katrina on the electric power distribution network under study. As previous work primarily focuses on analyzing randomized failures, we use random outages as a base for comparison with previous studies. In each simulation replication, a different set of vertices is chosen at random such that 60% of the network is inoperable. The random disruptions form a control sample, as there is explicitly no spatial association among the initial disruption.</p><p>To evaluate how the spatial characteristics of the disruption impact the network, additional simulation trials are performed using disruptions generated by search trees. Disruptions are generated using both a breadth-first search (BFS) and a depth-first search (DFS) tree <ref type="bibr">[45]</ref>, as both create spatially constrained patterns of outages while using no intrinsic information about the individual vertices. Details of the algorithms used to generate the disruptions are listed in Algorithms 1 and 2.</p><p>A BFS begins at a random vertex in the network, and failures propagate to all neighbors of that vertex before extending to neighbors-of-neighbors. As the size of the failure is pre-specified, the failures continue until the BFS tree is the required size. This provides a method for generating localized clusters of failures. Similarly, a DFS outage pattern begins at a random vertex and progresses away from the root node as far as possible within the network before searching additional root-node neighbors. The spatial pattern of DFS trees is connected, but far less localized. 
These are referred to as the BFS and DFS disruption methods for the remainder of the paper.</p><p>The search-tree generation methods are computationally cheap and are built entirely using the spatial structure of the network. The selection of these algorithms is motivated by existing research supporting the existence of tree-shaped outages in distribution systems owing to the hierarchical nature of electric power distribution <ref type="bibr">[43,</ref><ref type="bibr">46]</ref>. Here, we do not validate actual spatial distributions of outages against the BFS and DFS generation methods, but instead use these methods to isolate the significance of different spatial configurations of outages in the network on measurements of system performance. The initial distribution of outages for one simulation replication is shown in Fig. <ref type="figure">2</ref>.</p></div>
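Algorithms 1 and 2 are not reproduced in this excerpt; the following is a minimal Python sketch of the two generation schemes as described in the text, growing a fixed-size set of failed vertices from a random root. The adjacency-dict representation and function names are our own.

```python
import random
from collections import deque

def bfs_disruption(adj, n_failed, rng=random):
    """Grow a breadth-first tree of failures from a random root until
    n_failed vertices are inoperable (sketch of Algorithm 1).
    adj: dict mapping vertex -> list of neighbours."""
    root = rng.choice(sorted(adj))
    failed, queue = {root}, deque([root])
    while queue and len(failed) < n_failed:
        v = queue.popleft()
        for u in adj[v]:
            if u not in failed and len(failed) < n_failed:
                failed.add(u)
                queue.append(u)
    return failed

def dfs_disruption(adj, n_failed, rng=random):
    """Depth-first variant (sketch of Algorithm 2): failures run away
    from the root as far as possible before backtracking."""
    root = rng.choice(sorted(adj))
    failed, stack = set(), [root]
    while stack and len(failed) < n_failed:
        v = stack.pop()
        if v in failed:
            continue
        failed.add(v)
        for u in adj[v]:
            if u not in failed:
                stack.append(u)
    return failed
```

Both schemes produce connected (spatially constrained) failure sets, whereas the random baseline simply samples 60% of the vertex set uniformly.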
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Performance metric calculation</head><p>In order to characterize the networks as they fail and recover, we use two network-based measurements of system performance: network efficiency and largest connected component. We measure the global efficiency of the electric power network as it fails and recovers as one dimension of network performance. Global efficiency is defined as</p><formula>E = (1 / (N(N-1))) Σ_{i≠j} 1 / d(i, j)</formula><p>where N is the number of vertices and d(i, j) is the distance between vertex pair i and j. Network efficiency as a concept was proposed as a measure of how efficiently a network exchanges information <ref type="bibr">[47]</ref>; it has previously been used in the context of power system resilience evaluation <ref type="bibr">[11,</ref><ref type="bibr">48]</ref> and as a proxy for network performance <ref type="bibr">[34,</ref><ref type="bibr">49]</ref>.</p><p>Additionally, we measure the size of the largest connected component (LCC). This is defined as the number of vertices in the largest connected subgraph <ref type="bibr">[5]</ref>. A connected subgraph is a subset of the vertices and edges for which a path exists between all pairs of vertices. LCC has previously been used to evaluate topological models <ref type="bibr">[11]</ref> and provides a measure of the connectedness of the network (i.e., a fully connected network has a maximal LCC because every vertex is included in the largest cluster). LCC and efficiency have both been previously studied as performance measurements for network representations of power systems, and have been validated as system performance measurements when a broad range of vulnerability scenarios are evaluated <ref type="bibr">[11]</ref>.</p></div>
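Both metrics are straightforward to compute from an adjacency representation. The sketch below uses hop-count shortest paths via BFS, with unreachable pairs contributing zero to efficiency (the standard convention); in the study these statistics were computed with igraph in R, so this Python version is illustrative only.

```python
from collections import deque

def _shortest_paths(adj, src):
    """Hop distances from src via BFS (unweighted graph)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def global_efficiency(adj):
    """E = 1/(N(N-1)) * sum over i != j of 1/d(i, j);
    unreachable pairs contribute zero."""
    nodes = list(adj)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for i in nodes:
        dist = _shortest_paths(adj, i)
        total += sum(1.0 / d for v, d in dist.items() if v != i)
    return total / (n * (n - 1))

def largest_connected_component(adj):
    """Size (vertex count) of the largest connected subgraph."""
    seen, best = set(), 0
    for v in adj:
        if v not in seen:
            comp = set(_shortest_paths(adj, v))
            seen |= comp
            best = max(best, len(comp))
    return best
```

A fully connected network attains E = 1 and a maximal LCC; removing vertices lowers both, which is what makes them usable as disruption measures.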
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Simulation methodology</head><p>The recovery simulation generates initial disruptions via the random, BFS, and DFS methods and then subsequently repairs vertices in the network. The rate of repair (i.e., repaired vertices per time unit) is derived from the rate of outages seen in the Gulf Coast power operator data. This rate is kept constant through all experiments. At every time step, the vertices to be repaired are chosen based on their contribution to the total network efficiency. The number of vertices to be repaired is first fixed based on the time-dependent repair rate; then the set of vertices chosen for repair is selected from the subset of inoperable vertices which-if repaired-would maximally improve the network efficiency. Vertices are selected in a greedy fashion such that the selected subset maximally improves the efficiency of the network. The heuristic search is detailed in Algorithm 3.</p><p>Network statistics are recorded at each step, and vertices are repaired until the network is fully operational. The simulation procedure is depicted in Fig. <ref type="figure">3</ref>. The process of creating disruptions and repairing is repeated 100 times for each disruption generation method to account for the inherent randomness in the generation of the initial distributions. The analyses were performed on a 16-core Intel Xeon W-2145 processor, each core operating at 3.7 GHz, with 32 GB of RAM. Simulation, analysis, and the resulting plots were all generated in R version 3.4.4 <ref type="bibr">[50]</ref>. Network statistics were calculated using igraph <ref type="bibr">[51]</ref>.</p></div>
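One step of the greedy selection rule can be sketched as follows. Algorithm 3 itself is not reproduced here; this Python sketch recomputes global efficiency naively for each candidate, which is far less efficient than the authors' R implementation but illustrates the greedy criterion (restore, one at a time, the inoperable vertex whose repair most improves the efficiency of the operable subgraph). Function names are our own.

```python
from collections import deque

def _efficiency(adj):
    """Global efficiency of an unweighted graph (dict: vertex -> neighbours)."""
    nodes = list(adj)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for s in nodes:
        dist, queue = {s: 0}, deque([s])
        while queue:  # BFS hop distances from s
            v = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        total += sum(1.0 / d for v, d in dist.items() if v != s)
    return total / (n * (n - 1))

def greedy_repair_step(adj_full, operable, failed, k):
    """Repair k vertices, each chosen greedily as the inoperable vertex
    whose restoration most improves the efficiency of the operable
    subgraph (a sketch of the Algorithm-3-style heuristic)."""
    operable, failed = set(operable), set(failed)
    for _ in range(min(k, len(failed))):
        def eff_if_repaired(v):
            nodes = operable | {v}
            sub = {u: [w for w in adj_full[u] if w in nodes] for u in nodes}
            return _efficiency(sub)
        best = max(failed, key=eff_if_repaired)
        failed.remove(best)
        operable.add(best)
    return operable, failed
```

Repeating this step with k taken from the empirical repair rate until `failed` is empty reproduces the shape of the recovery simulation; the recorded efficiency/LCC trajectory is what Section 4.2 analyzes.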
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Static measures of impact</head><p>We first evaluate the sensitivity of the static measure of performance-i.e., the performance of the system at the moment the disruption occurs-to the spatial distribution of the disruption generated randomly as well as via the BFS and DFS algorithms (Fig. <ref type="figure">4</ref>). To provide an equal comparison-and in accordance with real data from Hurricane Katrina-we present results which impact 60% of the network regardless of the method of outage generation. However, our extensive sensitivity analysis suggests that the results remained consistent when evaluating network failures ranging from 10% to 90%.</p><p>[Algorithm 3 caption: Local-optimal search. Here, GE is the global efficiency of a graph, and F∖R indicates the removal of vertices R from F.]</p><p>Computed for 100 stochastic disruptions of each type, there is significant evidence that the disruption methods alter the resilience of the system. The mean efficiencies of BFS- and DFS-constructed disruptions are 485% and 457% higher than those of randomly constructed disruptions, respectively. Mean values vary significantly at each failure size, as seen in Table <ref type="table">1</ref>. Mean LCC increases similarly with BFS disruptions-a BFS increase of 595% over random, and a DFS increase of 494% over random (Table <ref type="table">3</ref>). Results additionally indicate that sample variance increases for tree-constructed disruptions in both performance metrics, as seen in Tables <ref type="table">1</ref> and <ref type="table">3</ref>. For the mean comparison, the distributions of efficiency and LCC values are compared using Kolmogorov-Smirnov (KS) two-sample tests, and all comparisons are found to be statistically significant at a significance level of 0.01. Results of the KS tests are given in Table <ref type="table">2</ref>.</p><p>The lower efficiency values and LCC of the random disruption method indicate greater disruption in the system. 
Lower network efficiency is representative of lower communicability across the network and, consequently, lower static resilience to a disruption. Likewise, lower LCC values indicate geographic sparsity among the network's operable vertices. While neither of these performance metrics directly maps to the performance of a high-fidelity power-system simulation, they demonstrate the sensitivity of generalizable, network-model measurements of system performance to the spatial distribution of a disruption. Consequently, any claim resulting from a measure of resilience is sensitive to the spatial characteristics of the initial disruption. Likewise, accounting for the spatial distribution of disruptions introduces greater uncertainty into our estimation of the resilience of a system.</p><p>The sensitivity of the resilience to the disruption method additionally manifests when measuring the number of customers with restored power. Mapping the geographical location of each of the vertices in our network to its respective census tract allows us to allocate customers to each substation relative to population density. Using this approximation, an average of 40.60% of the customers retain power when the grid is disrupted randomly, versus 39.21% and 39.47% for BFS and DFS outages, respectively. This similarity is expected, as the disruptions are constructed to disconnect 60% of the substations in the network, leaving approximately 40% of the network operational. However, similar to the measurements of efficiency and LCC, the variance in the population affected is higher for tree-based disruptions. Table <ref type="table">4</ref> shows the distribution of the number of customers without power after the network is made inoperable. After random outages are induced in the system, 33.57%-48.35% of the population's distribution-level power remains operational, while after BFS and DFS outages, 26.54%-53.77% and 26.94%-48.95% of the population's power remains operational, respectively. 
This represents an 88% increase in the uncertainty of the performance estimates. Providing estimates of uncertainty is critical to decision makers for the accurate characterization of the resilience of a system <ref type="bibr">[52]</ref>.</p></div>
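The KS two-sample comparison used above reduces to computing the maximum gap between two empirical CDFs. A minimal sketch of the D statistic in Python (the study reports full tests with p-values at the 0.01 level; this only computes the statistic):

```python
import bisect

def ks_two_sample_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov D statistic: the maximum absolute
    difference between the two empirical CDFs."""
    x, y = sorted(x), sorted(y)
    n, m = len(x), len(y)
    d = 0.0
    for v in x + y:  # the ECDF gap can only be maximal at a sample point
        fx = bisect.bisect_right(x, v) / n  # empirical CDF of x at v
        fy = bisect.bisect_right(y, v) / m
        d = max(d, abs(fx - fy))
    return d
```

Applied to the 100 efficiency (or LCC) values per disruption method, a large D indicates that the two disruption schemes induce genuinely different performance distributions, not just different means.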
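The customer-level numbers above come from an allocation of customers to substations. A simplified Python sketch of that step: here each tract's customers are split evenly among the substations located in it, whereas the paper weights by population density; all identifiers are illustrative.

```python
def allocate_customers(substation_tract, tract_population):
    """Split each census tract's customers among the substations located
    in that tract (even split; the paper uses a density-weighted
    allocation). Returns dict: substation -> customer load."""
    per_tract = {}
    for s, t in substation_tract.items():
        per_tract.setdefault(t, []).append(s)
    load = {}
    for t, subs in per_tract.items():
        share = tract_population[t] / len(subs)
        for s in subs:
            load[s] = share
    return load

def fraction_served(load, operable):
    """Fraction of customers whose substation is currently operable."""
    total = sum(load.values())
    served = sum(v for s, v in load.items() if s in operable)
    return served / total
```

Evaluating `fraction_served` over the 100 replications of each disruption method yields the customer-outage distributions summarized in Table 4.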
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Dynamic measures of impact</head><p>We also evaluate the dynamic performance-i.e., time-dependent performance metrics-under separate initial disruption methods as the power grid is repaired (Fig. <ref type="figure">5</ref>). The system performance-characterized by efficiency and LCC-is then measured over time as the system recovers. This is done to characterize the dynamic resilience of the grid under each disruption generation method, ceteris paribus.</p><p>Despite holding the recovery process constant, these results show that the efficiency of the network differs greatly in overall functional form between random and spatially generated disruptions, indicating that the recovery is significantly coupled to the spatial distribution of disruptions. Recovery from a random disruption pattern increases over time, reaching a maximum prior to all nodes being repaired (Fig. <ref type="figure">5e</ref>). This is an indication of the network exhibiting antifragile properties. Antifragility is a property by which a full reconstruction of the network is not optimal with respect to the chosen performance metric <ref type="bibr">[53,</ref><ref type="bibr">54]</ref>. In the context of network-performance measurements of an electric power distribution grid, antifragility indicates that a performance measurement rises above the optimal value prior to the system returning to its original state, as evidenced by the concave response seen in Fig. 5a <ref type="bibr">[30]</ref>. As antifragility is considered an inherent property of a system <ref type="bibr">[54]</ref>, the lack of antifragility in spatially-constructed outage systems indicates that it is conditional on the choice of outage distribution. Spatially constructed outages generally have a much higher efficiency throughout but follow an entirely different functional form than the recovery from random disruptions. 
The deviation between mean efficiencies is highest at the initial disruption and decreases over time. Similar to the static analysis, the variance is larger in the recovery from spatially characterized outages. Thus, failing to account for the spatial characteristics of the network disruption can drastically change the implications drawn from the associated resilience analysis. A key difference is the lack of antifragility in the electric power distribution network with spatially characterized outages.</p><p>The difference between the disruption generation techniques is diminished when comparing the dynamics of the mean LCC rather than the mean network efficiency (Fig. <ref type="figure">5 b,</ref><ref type="figure">d,</ref><ref type="figure">f</ref>). Beyond the initial value of the LCC at the time of failure, there is little difference in the functional form of the recovery of the network. The size of the LCC generally increases at an increasing rate as vertices are repaired, the primary difference being the initial size of the LCC after failures are generated in the network. These estimates of system recovery are therefore dependent on the spatial characteristics of the initial disruption; however, this result is sensitive to the performance metric used to measure recovery.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>Summary statistics for the distribution of efficiency for the respective failure modes, with the percentage of optimal network efficiency listed in parentheses. Failure fraction represents the fraction of the network which was induced as failed in each iteration. Results presented here are for failures in 60% of the network. Complete results are presented in Appendix Table A.<ref type="bibr">1</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>A key element of resilience is the ability of a system to respond to and recover from disruptions of unprecedented magnitude or unforeseen cause. By their nature, all disruptions will require recovery. This positions system recovery as a critical measurement in evaluating the multifaceted resilience of infrastructure systems. A holistic understanding of all types of community recovery is imperative for the continued adaptation to unforeseen challenges. However, these holistic understandings must be built upon a foundational knowledge of the interaction of disasters with the built environment. We contribute to the knowledge related to the interaction of the power distribution grid and hurricanes by providing a novel framework for network resilience analysis which is agnostic to the specifics of the system, allowing for general insights about all facets of community recovery. Our framework for considering spatially-constrained disruptions can be applied to any hierarchical network within a community adversely affected by natural hazards. We plan to extend the work presented here by evaluating the impact of spatial distributions of outages on high-fidelity models of infrastructure systems.</p><p>We show that the post-disruption network performance of the electric power distribution grid is highly sensitive to the spatial characteristics of disruptions in the system. Consequently, any insights about general grid resilience which fail to account for the spatial characteristics of the hazard significantly misrepresent the impact of natural hazards on distribution-level electric power infrastructure. More specifically, through the repeated simulation of multiple methods of failure and recovery, we show that previous methods of evaluating disaster impact overestimate the certainty associated with the measurements of system recovery. 
We show via multiple avenues that improved characterizations of disaster impact significantly increase both the magnitude and the uncertainty of the initial impact in the system. This difference holds through the duration of the recovery process, and when considering the dynamics of the system we find that emergent system properties such as antifragility are also dependent on the characteristics of the initial disruption. These differences are most striking when contextualized by their impact on the power distribution grid at a customer level. Our estimates indicate that the range of customers with access to electricity varies from 33 to 48% of the county using previous methods, and from 26 to 53% when using improved outage characterizations, highlighting the need for continued study of both the pattern of impacts due to natural disasters and the vulnerability of the electric power distribution grid. By demonstrating the sensitivity of the electric power grid to the spatial distribution of outages, we hope to encourage consideration of the spatial distribution of disruptions in conducting infrastructure resilience analytics.</p><p>[Fig. <ref type="figure">5</ref> caption: Performance metrics measured after the disruption over time for each disruption method. In a-f, the bands of uncertainty represent 95% confidence intervals sampled from the empirical density at each point in time. The black line is the mean of the observations. The x-axis is the relative completeness of the network repair scaled by the total restoration time for each replication.]</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0"><p>See <ref type="bibr">[29]</ref> for a comprehensive list of topics.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1"><p>The specific community on the Gulf-Coast is withheld for privacy reasons but represents a mid-sized metropolitan area</p></note>
		</body>
		</text>
</TEI>
