Many of the infrastructure sectors considered crucial by the Department of Homeland Security include networked systems (physical and temporal) that move some commodity, such as electricity, people, or even communication, from one location of importance to another. The costs associated with these flows make up the price of the network's normal functionality. These networks have limited capacities, which cause the marginal cost of a unit of flow across an edge to increase as congestion builds. To limit the expense of meeting a network's normal demand, we aim to increase the resilience of the system, and specifically the resilience of the arc capacities. Divisions of critical infrastructure have faced difficulties in recent years as inadequate resources have been available for needed upgrades and repairs. Because the future events that damage these networks, whether minor or extreme, cannot be predicted, officials must decide how best to allocate limited funds now so that these essential systems can withstand the heavy weight of society's reliance. We model these resource allocation decisions using a two-stage stochastic program (SP) for the purpose of network protection. Starting with a general form for a basic two-stage SP, we …
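For reference, a generic two-stage SP takes the standard form sketched below. This is a textbook template only, with placeholder recourse data $q(\xi)$, $T(\xi)$, $W$, $h(\xi)$; it is not the specific protection model developed in the cited work:

```latex
\min_{x \in X} \; c^{\top} x \;+\; \mathbb{E}_{\xi}\!\left[ Q(x,\xi) \right],
\qquad
Q(x,\xi) \;=\; \min_{y \ge 0} \left\{\, q(\xi)^{\top} y \;:\; W y = h(\xi) - T(\xi)\, x \,\right\}
```

Here $x$ collects the first-stage protection and allocation decisions made before the random damage scenario $\xi$ is realized, and $y$ collects the second-stage flow and recovery decisions made after the damage is observed.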
Design of Risk-Sharing Mechanism Related to Extreme Events
The occurrence of extreme events, whether natural or man-made, puts stress both on the physical infrastructure, causing damage and failures, and on the financial system. The ensuing recovery process requires a large amount of resources from financial agents, such as insurance companies. If the demand for funds exceeds their capacity, these financial agents cannot fulfill their obligations and default, failing to deliver the requested funds. However, agents can share risk among each other according to specific agreements. Our goal is to investigate the relationship between these agreements and the overall response of the physical/financial systems to extreme events, and to identify the optimal set of agreements according to some risk-based metrics.
We model the system as a directed, weighted graph in which nodes represent financial agents and links represent agreements among them. Each node faces an external demand for funds coming from the physical assets, modeled as a random variable, part of which can be transferred to other nodes via the directed edges. For a given probabilistic model of the demands and structure of the graph, we evaluate metrics such as the expected number of defaults, and we identify the graph configuration that optimizes the metric. The identified graph suggests to the agents a set …
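As a rough illustration of how such a metric can be evaluated, the sketch below estimates the expected number of defaults by Monte Carlo over random demands on a small directed graph. The one-round proportional transfer rule, the function names, and the exponential demand model are assumptions made for this example, not the mechanism studied in the paper:

```python
import numpy as np

def expected_defaults(capacity, transfer, demand_sampler, n_samples=10_000, seed=0):
    """Monte Carlo estimate of the expected number of defaulting agents.

    capacity : (n,) array   -- financial capacity of each agent
    transfer : (n, n) array -- transfer[i, j] = fraction of i's excess demand
                               passed to j along a directed agreement edge
    demand_sampler(rng)     -- returns one (n,) sample of external demands
    """
    rng = np.random.default_rng(seed)
    defaults = 0
    for _ in range(n_samples):
        demand = demand_sampler(rng)
        # One round of risk sharing: excess demand is pushed to neighbours
        # (an assumed rule, chosen only to keep the example short).
        excess = np.maximum(demand - capacity, 0.0)
        shared = demand + transfer.T @ excess - excess * transfer.sum(axis=1)
        defaults += np.count_nonzero(shared > capacity)
    return defaults / n_samples

# Example: 3 agents, agent 0 offloads half of its excess demand to agent 1.
capacity = np.array([1.0, 2.0, 1.5])
transfer = np.zeros((3, 3))
transfer[0, 1] = 0.5
sampler = lambda rng: rng.exponential(scale=1.0, size=3)
print(expected_defaults(capacity, transfer, sampler))
```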
- Award ID(s): 1638327
- Publication Date:
- NSF-PAR ID: 10161392
- Journal Name: Proc. of the 19th Working Conference of the IFIP Working Group 7.5 on Reliability and Optimization of Structural Systems, ETH Zurich, Zentrum, June 26-29, 2018.
- Page Range or eLocation-ID: 77-86
- Sponsoring Org: National Science Foundation
More Like this
- Resilience of urban communities hit by extreme events relies on prompt access to the financial resources needed for recovery. The functioning of physical infrastructure is therefore strongly tied to that of the financial system, where agents operate in the markets of insurance contracts. When the financial capacity of an agent is lower than the requests for funds from the communities, it defaults and fails to provide the requested funds, slowing down the recovery process. In this work, we investigate how the resilience of urban communities depends on the reliability of the financial agents operating in the insurance markets, and how to optimize the mechanism these agents adopt to share the requests for funds from the policyholders. We present results for a set of loss functions that reflect the costs borne by society due to the default of the financial agents.
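As a concrete, if simplified, illustration of what such loss functions might look like, the snippet below defines three candidate losses over a single realization of agent shortfalls. The names, the shortfall representation, and the weighting scheme are hypothetical choices for this example, not the loss functions studied in the paper:

```python
import numpy as np

# shortfall[i] = funds requested from agent i that it could not provide
# (an assumed representation for illustration only).
def loss_default_count(shortfall):
    return np.count_nonzero(shortfall > 0)        # number of defaulting agents

def loss_unmet_demand(shortfall):
    return shortfall.sum()                         # total funds not delivered

def loss_weighted(shortfall, community_size):
    return (community_size * shortfall).sum()      # weight by affected population

shortfall = np.array([0.0, 0.4, 1.2])
print(loss_default_count(shortfall), loss_unmet_demand(shortfall))
```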
- We consider the problem of distributed corruption detection in networks. In this model, each node of a directed graph is either truthful or corrupt. Each node reports the type (truthful or corrupt) of each of its out-neighbors: if it is truthful, it reports the truth, whereas if it is corrupt, it reports adversarially. This model, first considered by Preparata, Metze and Chien in 1967 and motivated by the desire to identify the faulty components of a digital system by having the other components check them, became known as the PMC model. The main known results for this model characterize networks in which all corrupt (that is, faulty) nodes can be identified when there is a known upper bound on their number. We are interested in networks in which a large fraction of the nodes can be classified. It is known that in the PMC model, in order to identify all corrupt nodes when their number is t, all in-degrees have to be at least t. In contrast, we show that in d-regular graphs with strong expansion properties, a 1 - O(1/d) fraction of the corrupt nodes and a 1 - O(1/d) fraction of the truthful nodes can be identified whenever there …
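The toy simulation below illustrates the reporting model: it builds a random d-out graph (not the expander construction analyzed in the abstract), fixes t corrupt nodes, lets corrupt nodes simply invert the truth as one possible adversarial strategy, and classifies each node by a majority vote over its incoming reports. All of these modeling choices and names are assumptions for illustration only:

```python
import random

def simulate_pmc(n=60, d=6, t=10, seed=1):
    """Toy PMC-style simulation: random d-out reporting graph, t corrupt nodes,
    corrupt reporters flip the truth, classification by incoming majority vote."""
    rng = random.Random(seed)
    corrupt = set(rng.sample(range(n), t))
    out = {v: rng.sample([u for u in range(n) if u != v], d) for v in range(n)}

    # report[(v, u)] = 1 means "v says u is corrupt"
    report = {}
    for v, nbrs in out.items():
        for u in nbrs:
            truth = 1 if u in corrupt else 0
            report[(v, u)] = truth if v not in corrupt else 1 - truth

    guesses = {}
    for u in range(n):
        votes = [report[(v, u)] for v in range(n) if (v, u) in report]
        if votes:
            guesses[u] = 1 if sum(votes) > len(votes) / 2 else 0

    correct = sum(guesses[u] == (1 if u in corrupt else 0) for u in guesses)
    return correct / len(guesses)   # fraction of classified nodes labeled correctly

print(simulate_pmc())
```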
- Grilli, Jacopo (Ed.) Collective behavior is an emergent property of numerous complex systems, from financial markets to cancer cells to predator-prey ecological systems. Characterizing modes of collective behavior is often done through human observation, training generative models, or other supervised learning techniques. Each of these cases requires knowledge of, and a method for characterizing, the macro-state(s) of the system. This presents a challenge for studying novel systems where there may be little prior knowledge. Here, we present a new unsupervised method of detecting emergent behavior in complex systems and discerning between distinct collective behaviors. We require only metrics d(1), d(2), defined on the set of agents X, which measure agents' nearness in variables of interest. We apply the method of diffusion maps to the systems (X, d(i)) to recover efficient embeddings of their interaction networks. Comparing these geometries, we formulate a measure of similarity between two networks, called the map alignment statistic (MAS). A large MAS is evidence that the two networks are codetermined in some fashion, indicating an emergent relationship between the metrics d(1) and d(2). Additionally, the form of the macro-scale organization is encoded in the covariances …
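For readers unfamiliar with diffusion maps, the minimal sketch below shows the standard construction: a Gaussian kernel on a pairwise distance matrix, a row-normalized Markov matrix, and its leading non-trivial eigenvectors as embedding coordinates. It is a generic textbook version, not the authors' implementation, and it does not reproduce the map alignment statistic; the bandwidth eps and the example data are arbitrary:

```python
import numpy as np

def diffusion_map(D, eps=1.0, n_components=2):
    """Minimal diffusion-map embedding from a pairwise distance matrix D."""
    K = np.exp(-D**2 / eps)                  # Gaussian affinity matrix
    P = K / K.sum(axis=1, keepdims=True)     # row-stochastic transition matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)          # sort eigenpairs by eigenvalue
    evals, evecs = evals.real[order], evecs.real[:, order]
    # Skip the trivial constant eigenvector (eigenvalue 1).
    return evecs[:, 1:n_components + 1] * evals[1:n_components + 1]

# Example: 20 agents positioned on a line, Euclidean distances as the metric d.
x = np.linspace(0, 1, 20)
D = np.abs(x[:, None] - x[None, :])
print(diffusion_map(D).shape)   # (20, 2)
```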
- The Twitter-Based Knowledge Graph for Researchers project is an effort to construct a knowledge graph of computation-based tasks and corresponding outputs. It will be utilized by subject matter experts, statisticians, and developers. A knowledge graph is a directed graph of knowledge accumulated from a variety of sources. For our application, Subject Matter Experts (SMEs) are experts in their respective non-computer-science fields, but are not necessarily experienced with running heavy computation on datasets. As a result, they find it difficult to generate workflows for their projects involving Twitter data and advanced analysis. Workflow management systems and libraries that facilitate computation are only practical when the users of these systems understand what analysis they need to perform. Our goal is to bridge this gap in understanding. Our queryable knowledge graph will generate a visual workflow for these experts and researchers to achieve their project goals. After meeting with our client, we established two primary deliverables. First, we needed to create an ontology of all Twitter-related information that an SME might want to answer. Secondly, we needed to build a knowledge graph based on this ontology and produce a set of APIs to trigger a set of network algorithms based on the …
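To give a flavor of how a queryable task/output knowledge graph can drive workflow generation, the sketch below represents tasks and their artifacts as a small directed graph and extracts the chain of steps leading to a goal. The node names, edge relations, and helper function are hypothetical illustrations, not the project's actual ontology or APIs:

```python
import networkx as nx

# Toy task/output knowledge graph; all names here are made up for illustration.
kg = nx.DiGraph()
kg.add_edge("collect_tweets", "tweet_dataset", relation="produces")
kg.add_edge("tweet_dataset", "build_retweet_network", relation="input_to")
kg.add_edge("build_retweet_network", "retweet_graph", relation="produces")
kg.add_edge("retweet_graph", "compute_centrality", relation="input_to")
kg.add_edge("compute_centrality", "influencer_ranking", relation="produces")

def workflow_to(goal):
    """Return the ordered chain of tasks and artifacts leading to a goal node."""
    nodes = nx.ancestors(kg, goal) | {goal}
    return list(nx.topological_sort(kg.subgraph(nodes)))

print(workflow_to("influencer_ranking"))
```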