The resilience of urban communities hit by extreme events relies on prompt access to the financial resources needed for recovery. The functioning of physical infrastructure is therefore strongly tied to that of the financial system, where agents operate in the market for insurance contracts. When the financial capacity of an agent is lower than the requests for funds from the communities, the agent defaults, failing to provide the requested funds and slowing down the recovery process. In this work, we investigate how the resilience of urban communities depends on the reliability of the financial agents operating in the insurance markets, and how to optimize the mechanism these agents adopt to share the requests for funds from the policyholders. We present results for a set of loss functions that reflect the costs borne by society due to the default of the financial agents.
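As a minimal illustration of this default mechanism, the sketch below estimates an expected societal loss for a single agent by Monte Carlo. The lognormal demand model, the capacity value, and the linear shortfall loss are assumptions made for the sketch, not the paper's calibration.

```python
import random

def expected_societal_loss(capacity, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the expected societal loss when an agent
    defaults, i.e., when the random demand for recovery funds exceeds
    the agent's financial capacity. The demand distribution and the
    loss function are illustrative assumptions."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        demand = rng.lognormvariate(0.0, 1.0)  # assumed demand model
        shortfall = max(demand - capacity, 0.0)
        # Assumed loss function: society's cost grows with unmet demand.
        total += shortfall
    return total / n_samples

print(expected_societal_loss(capacity=2.0))
```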
Design of Risk-Sharing Mechanism Related to Extreme Events
The occurrence of extreme events, either natural or man-made, puts stress on both the physical infrastructure, causing damage and failures, and the financial system. The ensuing recovery process requires a large amount of resources from financial agents, such as insurance companies. If the demand for funds exceeds their capacity, these financial agents cannot fulfill their obligations and default, unable to deliver the requested funds. However, agents can share risk with one another, according to specific agreements. Our goal is to investigate the relationship between these agreements and the overall response of the physical/financial systems to extreme events, and to identify the optimal set of agreements according to some risk-based metrics. We model the system as a directed and weighted graph, where nodes represent financial agents and links represent agreements among them. Each node faces an external demand for funds coming from the physical assets, modeled as a random variable, that can be transferred to other nodes via the directed edges. For a given probabilistic model of demands and structure of the graph, we evaluate metrics such as the expected number of defaults, and we identify the graph configuration that optimizes the metric. The identified graph suggests to the agents a set of agreements that minimizes global risk.
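A rough sketch of how such a metric could be estimated and optimized follows. The sharing-fraction parameterization `share`, the lognormal demands, the capacities, and the grid search are all illustrative assumptions; the paper's actual model and optimization procedure may differ.

```python
import random

def expected_defaults(capacities, share, n_samples=20_000, seed=0):
    """Monte Carlo estimate of the expected number of defaults.

    capacities[i] -- financial capacity of agent i (assumed known)
    share[i][j]   -- fraction of agent i's demand transferred to agent j
                     along the directed edge i -> j (a hypothetical
                     parameterization of the risk-sharing agreements)
    An agent defaults when its retained demand plus the demand
    transferred to it exceeds its capacity.
    """
    n = len(capacities)
    rng = random.Random(seed)
    total = 0
    for _ in range(n_samples):
        demand = [rng.lognormvariate(0.0, 1.0) for _ in range(n)]
        net = [0.0] * n
        for i in range(n):
            net[i] += (1.0 - sum(share[i])) * demand[i]  # retained demand
            for j in range(n):
                net[j] += share[i][j] * demand[i]        # transferred demand
        total += sum(1 for i in range(n) if net[i] > capacities[i])
    return total / n_samples

# Crude search over a small grid of sharing fractions for two agents:
caps = [2.0, 2.0]
best = min(
    (expected_defaults(caps, [[0.0, a], [b, 0.0]]), a, b)
    for a in (0.0, 0.25, 0.5)
    for b in (0.0, 0.25, 0.5)
)
print("expected defaults %.3f with shares a=%.2f, b=%.2f" % best)
```

The grid search stands in for the optimization over graph configurations described in the abstract; for more than a handful of agents, a structured method would replace the enumeration.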
- Award ID(s): 1638327
- PAR ID: 10161392
- Date Published:
- Journal Name: Proc. of the 19th Working Conference of the IFIP Working Group 7.5 on Reliability and Optimization of Structural Systems, ETH Zurich, Zentrum, June 26-29, 2018.
- Page Range / eLocation ID: 77-86
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We consider the problem of distributed corruption detection in networks. In this model, each node of a directed graph is either truthful or corrupt. Each node reports the type (truthful or corrupt) of each of its out-neighbors. If it is truthful, it reports the truth; if it is corrupt, it reports adversarially. This model, first considered by Preparata, Metze and Chien in 1967, motivated by the desire to identify the faulty components of a digital system by having the other components check them, became known as the PMC model. The main known results for this model characterize networks in which all corrupt (that is, faulty) nodes can be identified when there is a known upper bound on their number. We are interested in networks in which a large fraction of the nodes can be classified. It is known that in the PMC model, in order to identify all corrupt nodes when their number is t, all in-degrees have to be at least t. In contrast, we show that in d-regular graphs with strong expansion properties, a 1 - O(1/d) fraction of the corrupt nodes and a 1 - O(1/d) fraction of the truthful nodes can be identified whenever there is a majority of truthful nodes. We also observe that if the graph is very far from being a good expander, namely, if the deletion of a small set of nodes splits the graph into small components, then no corruption detection is possible even if most of the nodes are truthful. Finally, we discuss the algorithmic aspects and the computational hardness of the problem.
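A brute-force sketch of PMC-style identification on small instances is below. The `reports` encoding and the toy graph are assumptions for illustration; the abstract's results concern what large expander graphs permit beyond this exhaustive approach.

```python
import itertools

def consistent(corrupt_set, reports):
    """Check whether a candidate corrupt set explains all reports:
    every report made by a node outside the set must be truthful."""
    for (u, v), said_corrupt in reports.items():
        if u not in corrupt_set and said_corrupt != (v in corrupt_set):
            return False
    return True

def identify_corrupt(nodes, reports, t):
    """Brute-force PMC-style identification: return all corrupt sets of
    size at most t consistent with the reports. Feasible only for tiny
    graphs; exponential in t."""
    return [
        set(subset)
        for k in range(t + 1)
        for subset in itertools.combinations(nodes, k)
        if consistent(frozenset(subset), reports)
    ]

# Toy instance: node 2 is corrupt; reports[(u, v)] = u's claim "v is corrupt".
reports = {(0, 1): False, (1, 2): True, (2, 0): True, (0, 2): True}
print(identify_corrupt(range(3), reports, t=1))  # -> [{2}]
```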
- Many of the infrastructure sectors considered crucial by the Department of Homeland Security include networked systems (physical and temporal) that move some commodity, such as electricity, people, or communication, from one location of importance to another. The costs associated with these flows make up the price of the network's normal functionality. These networks have limited capacities, which cause the marginal cost of a unit of flow across an edge to increase as congestion builds. To limit the expense of a network's normal demand, we aim to increase the resilience of the system, and specifically the resilience of the arc capacities. Divisions of critical infrastructure have faced difficulties in recent years as inadequate resources have been available for needed upgrades and repairs. Without being able to determine the future factors that cause damage, both minor and extreme, to the networks, officials must decide how best to allocate the limited funds now so that these essential systems can withstand the heavy weight of society's reliance. We model these resource-allocation decisions using a two-stage stochastic program (SP) for the purpose of network protection. Starting with a general form for a basic two-stage SP, we enforce assumptions that specify characteristics key to this type of decision model. The second-stage objective, which represents the price of the network's routine functionality, is nonlinear, as it reflects the increasing marginal cost per unit of additional flow across an arc. After the model has been designed to reflect the network-protection problem, we are left with a nonconvex, nonlinear, nonseparable risk-neutral program. This research focuses on key reformulation techniques that transform the problematic model into one that is convex, separable, and much more solvable. Our approach uses perspective functions to convexify the feasibility set of the second stage and second-order conic constraints to represent nonlinear constraints in a form that better allows the use of computational solvers. Once these methods have been applied to the risk-neutral model, we introduce a risk measure into the first stage that allows us to control the balance between an efficient, solvable model and the need to hedge against extreme events. Using Benders cuts that exploit linear separability, we give a decomposition and solution algorithm for the general network model. The innovations included in this formulation are then implemented on a transportation network with given flow demand
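A minimal sketch of the perspective-function idea on a single arc is below, using the cvxpy modeling library, whose `quad_over_lin(x, y) = x^2/y` is the perspective of the square function and is second-order-cone representable. The single-arc setup, budget, and demand are illustrative assumptions, not the abstract's two-stage model.

```python
import cvxpy as cp

# Congestion cost x^2 / c on one arc: the marginal cost per unit of flow
# grows as the arc fills relative to its (protected) capacity. Modeling
# it with quad_over_lin keeps the problem jointly convex in flow and
# capacity, the kind of reformulation the abstract refers to.
flow = cp.Variable(nonneg=True)   # flow routed across the arc
cap = cp.Variable(nonneg=True)    # post-investment capacity (decision)
budget = 5.0                      # assumed resource budget
demand = 3.0                      # assumed flow demand on the arc

cost = cp.quad_over_lin(flow, cap)
prob = cp.Problem(cp.Minimize(cost), [flow >= demand, cap <= budget])
prob.solve()
print(prob.value, flow.value, cap.value)
```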
- We introduce threshold growth in the classical threshold contagion model, or equivalently a network of Cramér-Lundberg processes in which nodes take downward jumps when a neighboring node fails. Choosing the configuration model as the underlying graph, we prove fluid limits for the baseline model, as well as extensions to the directed case, state-dependent interarrival times, and the case of growth driven by upward jumps. We obtain explicit ruin probabilities for the nodes according to their characteristics: initial threshold and in- (and out-) degree. We then allow nodes to choose their connectivity by trading off link benefits against contagion risk. We define a rational-equilibrium concept in which nodes choose their connectivity according to an expected failure probability of any given link, and we impose the condition that the expected failure probability coincides with the actual failure probability under the optimal connectivity. We show existence of an asymptotic equilibrium and convergence of the sequence of equilibria on finite networks. In particular, our results show that systems with higher overall growth may have a higher failure probability in equilibrium.
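A stylized simulation of the downward-jump contagion on a fixed directed graph is sketched below. The discrete thresholds, unit jumps, and toy network are assumptions; the abstract works with Cramér-Lundberg dynamics and configuration-model fluid limits rather than this discrete cascade.

```python
def simulate_contagion(neighbors, thresholds, initially_failed, jump=1):
    """Discrete sketch of the downward-jump contagion: each node starts
    with a threshold (a capital buffer); every failed in-neighbor
    reduces it by `jump`, and the node fails once the buffer is
    exhausted. A stylized stand-in for the network dynamics."""
    failed = set(initially_failed)
    frontier = list(initially_failed)
    hits = {v: 0 for v in thresholds}
    while frontier:
        u = frontier.pop()
        for v in neighbors.get(u, []):  # directed edge u -> v
            if v in failed:
                continue
            hits[v] += jump
            if hits[v] >= thresholds[v]:
                failed.add(v)
                frontier.append(v)
    return failed

# Toy directed network: node 0 fails first and the shock propagates.
neighbors = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
thresholds = {0: 1, 1: 1, 2: 2, 3: 2}
print(sorted(simulate_contagion(neighbors, thresholds, {0})))  # -> [0, 1, 2, 3]
```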
- A directed acyclic graph G = (V,E) is said to be (e,d)-depth robust if for every subset S ⊆ V of |S| ≤ e nodes, the graph G-S still contains a directed path of length d. If the graph is (e,d)-depth robust for any e,d such that e+d ≤ (1-ε)|V|, then the graph is said to be ε-extreme depth-robust. In the field of cryptography, (extremely) depth-robust graphs with low indegree have found numerous applications, including the design of side-channel-resistant Memory-Hard Functions, Proofs of Space and Replication, and the design of Computationally Relaxed Locally Correctable Codes. In these applications, it is desirable to ensure the graphs are locally navigable, i.e., that there is an efficient algorithm GetParents, running in time polylog |V|, which takes as input a node v ∈ V and returns the set of v's parents. We give the first explicit construction of locally navigable ε-extreme depth-robust graphs with indegree O(log |V|). Previous constructions of ε-extreme depth-robust graphs either had indegree ω̃(log² |V|) or were not explicit.
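A brute-force checker for the (e,d)-depth-robustness definition on tiny DAGs is sketched below. It is illustrative only: exponential in e, and unrelated to the explicit construction the abstract describes.

```python
import itertools

def longest_path_length(n, edges, removed):
    """Length (in edges) of the longest path in the DAG after deleting
    `removed`. Assumes each edge (u, v) has u < v, so ascending node
    order is a topological order."""
    dist = [0] * n
    for u, v in sorted(edges):
        if u in removed or v in removed:
            continue
        dist[v] = max(dist[v], dist[u] + 1)
    return max((d for i, d in enumerate(dist) if i not in removed), default=0)

def is_depth_robust(n, edges, e, d):
    """Check the definition directly: every deletion of at most e nodes
    must leave a directed path of length d."""
    return all(
        longest_path_length(n, edges, set(s)) >= d
        for k in range(e + 1)
        for s in itertools.combinations(range(n), k)
    )

# Path graph 0 -> 1 -> 2 -> 3 -> 4: any single deletion leaves a path
# of length at least 1, so the graph is (1, 1)-depth robust.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(is_depth_robust(5, edges, e=1, d=1))  # -> True
```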