

Title: Design of Risk-Sharing Mechanism Related to Extreme Events
The occurrence of extreme events, whether natural or man-made, stresses both the physical infrastructure, causing damage and failures, and the financial system. The subsequent recovery process requires large amounts of resources from financial agents, such as insurance companies. If the demand for funds exceeds their capacity, these agents cannot fulfill their obligations and default, unable to deliver the requested funds. However, agents can share risk with one another according to specific agreements. Our goal is to investigate the relationship between these agreements and the overall response of the physical and financial systems to extreme events, and to identify the optimal set of agreements according to risk-based metrics. We model the system as a directed, weighted graph, where nodes represent financial agents and links represent agreements among them. Each node faces an external demand for funds from the physical assets, modeled as a random variable, which can be transferred to other nodes via the directed edges. For a given probabilistic model of demands and a given graph structure, we evaluate metrics such as the expected number of defaults, and we identify the graph configuration that optimizes the metric. The identified graph suggests to the agents a set of agreements that minimizes global risk.
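As a rough illustration of the metric discussed in the abstract, the expected number of defaults for a fixed graph of agreements can be estimated by Monte Carlo simulation. The sketch below is not the paper's model: the exponential demand distribution, the fixed per-edge transfer fractions, and all names are illustrative assumptions.

```python
import random

def expected_defaults(capacity, transfer, n_samples=20_000, seed=0):
    """Monte Carlo estimate of the expected number of defaults.

    capacity : dict node -> funds the agent can supply
    transfer : dict (i, j) -> fraction of node i's demand passed to node j
               along a directed edge (fractions out of i sum to <= 1;
               the remainder stays with i)
    External demands are drawn i.i.d. exponential with mean 1
    (an illustrative choice, not the paper's distribution).
    """
    rng = random.Random(seed)
    nodes = list(capacity)
    defaults = 0
    for _ in range(n_samples):
        demand = {v: rng.expovariate(1.0) for v in nodes}
        load = {v: 0.0 for v in nodes}
        for v in nodes:
            shared = 0.0
            for (i, j), w in transfer.items():
                if i == v:                      # v passes part of its demand to j
                    load[j] += w * demand[v]
                    shared += w * demand[v]
            load[v] += demand[v] - shared       # v keeps the rest
        # an agent defaults when its retained load exceeds its capacity
        defaults += sum(load[v] > capacity[v] for v in nodes)
    return defaults / n_samples
```

Sweeping `transfer` over candidate agreement structures and keeping the graph with the smallest estimate is one naive way to search for the risk-minimizing configuration.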
Award ID(s):
1638327
NSF-PAR ID:
10161392
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Proc. of the 19th Working Conference of the IFIP Working Group 7.5 on Reliability and Optimization of Structural Systems, ETH Zurich, Zentrum, June 26-29, 2018.
Page Range / eLocation ID:
77-86
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Urban water utilities, facing rising demands and limited supply expansion options, increasingly partner with neighboring utilities to develop and operate shared infrastructure. Inter‐utility agreements can reduce costs via economies of scale and help limit environmental impacts, as substitutes for independent investments in large capital projects. However, unexpected shifts in demand growth or water availability, deviating from projections underpinning cooperative agreements, can introduce both supply and financial risk to utility partners. Risks may also be compounded by asymmetric growth in demand across partners or inflexibility of the agreement structure itself to adapt to changing conditions of supply and demand. This work explores the viability of both fixed and adjustable capacity inter‐utility cooperative agreements to mitigate regional water supply and financial risk for utilities that vary in size, growth expectations, and independent infrastructure expansion options. Agreements formalized for a shared regional water treatment plant are found to significantly improve regional supply reliability and financial outcomes, despite highly correlated weather and climate across neighboring supply systems (e.g., concurrent drought events). Regional improvements in performance, however, mask tradeoffs among individual agreement partners. Adjustable treatment capacity allocations add flexibility to inter‐utility agreements but can compound financial risk to each utility as a function of the decision‐making of the other partners. Often the sensitivity to partners' decision‐making under an adjustable agreement degrades financial performance, relative to agreements with fixed capacities allocated to each partner. Our results demonstrate the significant benefits cooperative agreements offer, providing a template to aid decision‐makers in the development of water supply partnerships.
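The contrast between fixed and adjustable capacity allocations can be made concrete with a toy supply calculation. This is a minimal sketch under assumed numbers, not the study's simulation model: it ignores the financial side entirely and reallocates the adjustable capacity in simple proportion to current demand.

```python
def shortfalls(demands, allocations):
    """Per-partner unmet demand, given capacity allocations (same units)."""
    return [max(d - a, 0.0) for d, a in zip(demands, allocations)]

def adjustable_allocation(demands, plant_capacity):
    """Reallocate the shared plant's capacity in proportion to demand
    (one simple adjustable rule; the actual agreement rules differ)."""
    total = sum(demands)
    if total == 0:
        return [plant_capacity / len(demands)] * len(demands)
    return [plant_capacity * d / total for d in demands]
```

With a 100-unit plant split 50/50 but asymmetric demand growth (70 vs. 30), the fixed agreement leaves one partner 20 units short while the adjustable rule covers both, which illustrates why asymmetric growth stresses fixed allocations; the financial risk that adjustability introduces is not captured by this sketch.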

     
  2. Many of the infrastructure sectors that are considered to be crucial by the Department of Homeland Security include networked systems (physical and temporal) that function to move some commodity like electricity, people, or even communication from one location of importance to another. The costs associated with these flows make up the price of the network’s normal functionality. These networks have limited capacities, which cause the marginal cost of a unit of flow across an edge to increase as congestion builds. In order to limit the expense of a network’s normal demand we aim to increase the resilience of the system and specifically the resilience of the arc capacities. Divisions of critical infrastructure have faced difficulties in recent years as inadequate resources have been available for needed upgrades and repairs. Without being able to determine future factors that cause damage both minor and extreme to the networks, officials must decide how to best allocate the limited funds now so that these essential systems can withstand the heavy weight of society’s reliance. We model these resource allocation decisions using a two-stage stochastic program (SP) for the purpose of network protection. Starting with a general form for a basic two-stage SP, we enforce assumptions that specify characteristics key to this type of decision model. The second stage objective—which represents the price of the network’s routine functionality—is nonlinear, as it reflects the increasing marginal cost per unit of additional flow across an arc. After the model has been designed properly to reflect the network protection problem, we are left with a nonconvex, nonlinear, nonseparable risk-neutral program. This research focuses on key reformulation techniques that transform the problematic model into one that is convex, separable, and much more solvable.
Our approach focuses on using perspective functions to convexify the feasibility set of the second stage and second-order conic constraints to represent nonlinear constraints in a form that better allows the use of computational solvers. Once these methods have been applied to the risk-neutral model, we introduce a risk measure into the first stage that allows us to control the balance between an efficient, solvable model and the need to hedge against extreme events. Using Benders cuts that exploit linear separability, we give a decomposition and solution algorithm for the general network model. The innovations included in this formulation are then implemented on a transportation network with given flow demands.
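The kind of second-order cone representation mentioned above can be illustrated with a standard identity (a generic textbook example, not the paper's exact constraints). The perspective of the quadratic $g(y)=y^{2}$ is $(x,y)\mapsto y^{2}/x$ for $x>0$, and its epigraph is exactly a second-order cone:

```latex
t \;\ge\; \frac{y^{2}}{x},\quad x > 0
\quad\Longleftrightarrow\quad
\left\lVert \begin{pmatrix} 2y \\ t - x \end{pmatrix} \right\rVert_{2} \;\le\; t + x,
\qquad t,\,x \ge 0,
```

since squaring both sides gives $(t+x)^{2} \ge 4y^{2} + (t-x)^{2}$, i.e. $4tx \ge 4y^{2}$. Constraints of this shape are accepted directly by conic solvers, which is the practical payoff of the perspective-function reformulation.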
  3. Resilience of urban communities hit by extreme events relies on prompt access to the financial resources needed for recovery. Therefore, the functioning of physical infrastructure is strongly tied to that of the financial system, where agents operate in the markets for insurance contracts. When the financial capacity of an agent is lower than the requests for funds from the communities, it defaults and fails to provide the requested funds, slowing down the recovery process. In this work, we investigate how the resilience of urban communities depends on the reliability of the financial agents operating in the insurance markets, and how to optimize the mechanism these agents adopt to share the requests for funds from policyholders. We present results for a set of loss functions that reflect the costs borne by society due to the default of the financial agents.
  4. We consider the problem of distributed corruption detection in networks. In this model each node of a directed graph is either truthful or corrupt. Each node reports the type (truthful or corrupt) of each of its out-neighbors. If it is truthful, it reports the truth, whereas if it is corrupt, it reports adversarially. This model, first considered by Preparata, Metze and Chien in 1967, motivated by the desire to identify the faulty components of a digital system by having the other components check them, became known as the PMC model. The main known results for this model characterize networks in which all corrupt (that is, faulty) nodes can be identified, when there is a known upper bound on their number. We are interested in networks in which a large fraction of the nodes can be classified. It is known that in the PMC model, in order to identify all corrupt nodes when their number is t, all in-degrees have to be at least t. In contrast, we show that in d-regular graphs with strong expansion properties, a 1 - O(1/d) fraction of the corrupt nodes and a 1 - O(1/d) fraction of the truthful nodes can be identified, whenever there is a majority of truthful nodes. We also observe that if the graph is very far from being a good expander, namely, if the deletion of a small set of nodes splits the graph into small components, then no corruption detection is possible even if most of the nodes are truthful. Finally we discuss the algorithmic aspects and the computational hardness of the problem.
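The reporting setup of the PMC model is easy to simulate. The sketch below is illustrative only: it uses one particular adversary (corrupt nodes always invert the truth) and classifies each node by a simple majority of the reports it receives, which is not the paper's classification procedure.

```python
def simulate_reports(out_nbrs, corrupt):
    """Each node reports on its out-neighbours.

    out_nbrs : dict node -> list of out-neighbours
    corrupt  : set of corrupt nodes
    Truthful nodes report the true type; corrupt nodes here always lie
    (one simple adversary among many possible ones).
    Returns dict node -> list of booleans, True meaning "reported corrupt".
    """
    reports = {v: [] for v in out_nbrs}
    for u, nbrs in out_nbrs.items():
        for v in nbrs:
            truth = v in corrupt
            reports[v].append((not truth) if u in corrupt else truth)
    return reports

def majority_labels(reports):
    """Label a node corrupt iff a strict majority of reports say so."""
    return {v: sum(r) > len(r) / 2 for v, r in reports.items()}
```

On a small dense example with a majority of truthful nodes, the lone corrupt node is outvoted by its truthful in-neighbours; on sparse or poorly connected graphs this naive vote fails, consistent with the expansion requirement stated above.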
  5. Abstract

    Graph generative models have recently emerged as an interesting approach to construct molecular structures atom‐by‐atom or fragment‐by‐fragment. In this study, we adopt the fragment‐based strategy and decompose each input molecule into a set of small chemical fragments. In drug discovery, some drug molecules are designed by replacing certain chemical substituents with their bioisosteres or alternative chemical moieties. This inspires us to group decomposed fragments into different fragment clusters according to their local structural environment around bond‐breaking positions. In this way, an input structure can be transformed into an equivalent three‐layer graph, in which individual atoms, decomposed fragments, or obtained fragment clusters act as graph nodes at each corresponding layer. We further implement a prototype model, named multi‐resolution graph variational autoencoder (MRGVAE), to learn embeddings of constituted nodes at each layer in a fine‐to‐coarse order. Our decoder adopts a similar but conversely hierarchical structure. It first predicts the next possible fragment cluster, then samples an exact fragment structure out of the determined fragment cluster, and sequentially attaches it to the preceding chemical moiety. Our proposed approach performs comparably on molecular evaluation metrics to several other graph‐based molecular generative models. The introduction of the additional fragment cluster graph layer will hopefully increase the odds of assembling new chemical moieties absent in the original training set and enhance their structural diversity. We hope that our prototyping work will inspire more creative research to explore the possibility of incorporating different kinds of chemical domain knowledge into a similar multi‐resolution neural network architecture.
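The three-layer graph described above can be pictured with a minimal, hypothetical encoding. All names, fragments, and clusters below are made up for illustration; this is not MRGVAE's actual data structure.

```python
# Layer 0: atoms; layer 1: decomposed fragments (groups of atom indices);
# layer 2: fragment clusters (groups of fragments with similar local
# environment around the bond-breaking positions). All labels hypothetical.
molecule = {
    "atoms": ["C", "C", "O", "N"],                     # layer-0 nodes
    "bonds": [(0, 1), (1, 2), (1, 3)],                 # atom-level edges
    "fragments": {"f0": [0, 1], "f1": [2], "f2": [3]}, # layer-1 nodes
    "clusters": {"backbone": ["f0"],
                 "hydroxyl-like": ["f1"],
                 "amine-like": ["f2"]},                # layer-2 nodes
}

def cluster_of(fragment_id, molecule):
    """Look up which cluster a decomposed fragment belongs to."""
    for name, members in molecule["clusters"].items():
        if fragment_id in members:
            return name
    return None
```

A decoder operating coarse-to-fine, as described, would first pick a cluster (e.g. "amine-like"), then sample a concrete fragment from it, then attach that fragment's atoms to the growing structure.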

     