

Title: Importance measures for inspections in binary networks
Many infrastructure systems can be modeled as networks of components with binary states (intact, damaged). Information about the components' conditions is crucial for maintaining the system. However, budget constraints often make it impossible to collect information on all components. Several metrics have been developed to assess the importance of components in relation to maintenance actions: an important component is one that should receive high maintenance priority. In this paper we focus instead on the priority to assign to component inspections and information collection. We investigate metrics based on system-level (global) and component-level (local) decision making after inspection for networks with different topologies, and compare the results with traditional importance measures. We then discuss the computational challenges of these metrics and present possible approximation approaches.
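A minimal sketch may help fix ideas. The Python snippet below prices a Value-of-Information-style inspection priority for a single binary component, assuming a perfect inspection and a binary repair-or-do-nothing decision; the function name and all cost and probability figures are illustrative assumptions, not the paper's formulation.

```python
def voi_perfect_inspection(p_damaged, cost_repair, cost_failure):
    """VoI of a perfect inspection of one binary component (toy model).

    Without inspection, the decision maker picks the cheaper of "repair"
    vs. "do nothing"; with a perfect observation, repair happens only if
    the component is actually damaged.
    """
    # Expected loss of the best a-priori action.
    loss_no_info = min(cost_repair, p_damaged * cost_failure)
    # Expected loss when the true state is revealed before acting.
    loss_with_info = p_damaged * min(cost_repair, cost_failure)
    return loss_no_info - loss_with_info

# Illustrative numbers: an unreliable component with a costly failure mode.
print(voi_perfect_inspection(p_damaged=0.3, cost_repair=1.0, cost_failure=10.0))
```

In this toy model the inspection is most valuable when neither a-priori action clearly dominates; the metrics studied in the paper extend this reasoning to components embedded in a network.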
Award ID(s):
1653716
NSF-PAR ID:
10162531
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
ICASP13 - 13th International Conference on Applications of Statistics and Probability in Civil Engineering
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    We develop computable metrics to assign priorities for information collection on binary systems composed of binary components. Components are worth inspecting because their condition states are uncertain and system functioning depends on them. The Value of Information (VoI) quantifies the impact of information on decision making under uncertainty, accounting for a component's reliability and role in the system, the precision of the observation, the available maintenance actions, and the expected economic losses. We introduce VoI-based metrics for system-level ("global") and component-level ("local") maintenance actions, analyze the properties of these metrics, and apply them to series and parallel systems. We discuss their computational complexity in applications to general network systems and, to tame the complexity of the local metric assessment, we present a heuristic and assess its performance on case studies.
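    The global metric can be illustrated by brute force on small systems. The sketch below enumerates component states to price a single system-level decision (repair everything vs. do nothing) before and after a perfect inspection of one component; the structure functions, costs, and probabilities are illustrative assumptions, not the paper's exact formulation.

```python
import itertools

def system_failure_prob(structure, p_dam, fixed=None):
    """P(system fails) by enumeration over component states.

    `structure(states)` returns True if the system works (True = intact);
    `p_dam[i]` is P(component i damaged); `fixed` pins known states.
    """
    fixed = fixed or {}
    prob = 0.0
    for states in itertools.product([True, False], repeat=len(p_dam)):
        if any(states[i] != s for i, s in fixed.items()):
            continue
        pr = 1.0
        for i, (s, p) in enumerate(zip(states, p_dam)):
            if i not in fixed:
                pr *= (1 - p) if s else p
        if not structure(states):
            prob += pr
    return prob

def global_voi(structure, p_dam, i, c_repair, c_fail):
    """VoI of perfectly inspecting component i before choosing between
    a full repair (cost c_repair) and doing nothing (risking c_fail)."""
    prior_loss = min(c_repair, system_failure_prob(structure, p_dam) * c_fail)
    post_loss = 0.0
    for state, p_obs in [(True, 1 - p_dam[i]), (False, p_dam[i])]:
        pf = system_failure_prob(structure, p_dam, fixed={i: state})
        post_loss += p_obs * min(c_repair, pf * c_fail)
    return prior_loss - post_loss

# Two-component series system: it works only if every component is intact.
series = lambda s: all(s)
print([round(global_voi(series, [0.1, 0.3], i, 1.0, 2.0), 3) for i in (0, 1)])
```

    On this toy series example the less reliable component receives the larger VoI, consistent with the series-system observation in the next abstract; for parallel systems the ranking can reverse.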
  2.
    Optimal exploration of engineering systems can be guided by the principle of Value of Information (VoI), which accounts for the topological importance of components, their reliability, and the management costs. For series systems, in most cases higher inspection priority should be given to unreliable components. For redundant systems such as parallel systems, analysis of one-shot decision problems shows that higher inspection priority should be given to more reliable components. This paper investigates the optimal exploration of redundant systems in long-term decision making with sequential inspection and repair. When the expected cumulative discounted cost is considered, it may become more efficient to give higher inspection priority to less reliable components, in order to preserve system redundancy. To investigate this problem, we develop a Partially Observable Markov Decision Process (POMDP) framework for sequential inspection and maintenance of redundant systems, in which the VoI analysis is embedded in the optimal selection of exploratory actions. We investigate alternative approximate POMDP solvers for parallel and more general systems, compare their computational complexity and performance, and show how the inspection priorities depend on the economic discount factor, the degradation rate, the inspection precision, and the repair cost.
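    At the core of such a POMDP is a belief over each component's binary state, pushed forward by a degradation model and corrected by noisy inspections. The toy sketch below shows that belief dynamics for one component; the transition rate and inspection precision are illustrative assumptions, not the paper's model.

```python
def degrade(b_dam, rate):
    """One-step transition: an intact component becomes damaged w.p. `rate`."""
    return b_dam + (1.0 - b_dam) * rate

def inspect(b_dam, observed_damaged, precision):
    """Bayes update of P(damaged) after a noisy inspection.

    `precision` is the probability the inspection reports the true state.
    """
    like_dam = precision if observed_damaged else 1.0 - precision
    like_ok = (1.0 - precision) if observed_damaged else precision
    num = like_dam * b_dam
    return num / (num + like_ok * (1.0 - b_dam))

# Belief drifts upward with degradation, then is sharpened by an "intact" reading.
b = degrade(0.10, rate=0.05)
print(b, inspect(b, observed_damaged=False, precision=0.9))
```

    In the full POMDP, exploratory (inspection) actions are chosen by trading the cost of inspecting against the value such belief updates add to later repair decisions, discounted over time.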
  3. One of the most significant challenges in software code auditing is the presence of vulnerabilities in source code. Every year, more and more software flaws are discovered, either internally in proprietary code or publicly disclosed. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. To create a large-scale machine learning system for function-level vulnerability identification, we utilized a sizable dataset of C and C++ open-source code containing millions of functions with potential buffer overflow exploits. We have developed an efficient and scalable vulnerability detection method based on neural network models that learn features extracted from the source code. The source code is first converted into an intermediate representation to remove unnecessary components and shorten dependencies. We preserve the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into neural networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we have proposed a neural network model that can overcome issues associated with traditional neural networks. We use evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time to measure performance. We conducted a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all neural network models provide higher accuracy when semantic and syntactic information is used as features. However, this approach requires more execution time due to the added complexity of the word embedding algorithm. Moreover, our proposed model provides higher accuracy than the LSTM, BiLSTM, LSTM-Autoencoder, word2vec, and BERT models, and the same accuracy as the GPT-2 model with greater efficiency.
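    To make the pipeline concrete, here is a minimal PyTorch-style sketch of the embed-then-classify stage for one of the listed architectures (a BiLSTM over token embeddings). The vocabulary size, dimensions, and shapes are illustrative assumptions rather than the authors' configuration; in their setting the embedding weights would be initialized from GloVe or fastText vectors.

```python
import torch
import torch.nn as nn

class VulnClassifier(nn.Module):
    """Toy BiLSTM classifier over token ids of a tokenized source function."""

    def __init__(self, vocab_size, embed_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # pretrained in practice
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # vulnerable vs. not vulnerable

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq_len, embed_dim)
        _, (h, _) = self.lstm(x)         # h: (2 directions, batch, hidden)
        h = torch.cat([h[0], h[1]], dim=-1)
        return self.head(h)              # class logits

model = VulnClassifier(vocab_size=20_000)
logits = model(torch.randint(0, 20_000, (4, 64)))  # 4 functions, 64 tokens each
print(logits.shape)  # torch.Size([4, 2])
```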
  4. Abstract

    Aim

    Conservation planning and prioritization generally have focused on protecting taxa based on assessments of their long‐term persistence or on protecting habitats and sites with high species richness. An implicit assumption of these approaches is that species are equally different from each other. We propose metrics for conservation planning and prioritization that include consideration of differences among taxa in their functional characteristics to ensure long‐term maintenance of ecosystem functioning and services.

    Innovation

    We define metrics of functional distinctiveness, irregularity and singularity for a species. Functional distinctiveness is the mean distance in trait space of a species to all other species in a community. Functional irregularity is the variation in the proportional distances of a focal species to all other species based on a Hill function. Functional singularity is the product of those two metrics. These metrics can be weighted based on proportional abundance, biomass or frequency of occurrence. The metrics can be used to prioritize particular species for conservation based on their functional characteristics or to identify functionally distinct priority areas for conservation using the mean functional distinctiveness, irregularity and singularity of a set of species in an area. The metrics can be compared to the species richness of that area, thereby identifying areas that might have low species richness, but whose species are especially functionally distinct, providing important information of conservation relevance. (A numerical sketch of these definitions follows this abstract.)

    Main conclusions

    Applying these metrics to data on the global distributions of parrots, we identified species that are not of current conservation concern because they are geographically widespread, but which might be prioritized due to their functional singularity (e.g., the scarlet macaw). We also identified areas that are species poor and not generally considered noteworthy for their parrot fauna, but that contain a fauna that is functionally singular (e.g., Chile). Together, these metrics broaden the criteria used for conservation prioritization.

     
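    The three metrics can be sketched numerically under one plausible reading of the definitions above: Euclidean trait distances, with irregularity taken as one minus the Hill-number evenness of a focal species' proportional distances. The trait matrix and Hill order q below are illustrative assumptions.

```python
import numpy as np

def distinctiveness(traits, i):
    """Mean trait-space distance from species i to all other species."""
    d = np.linalg.norm(traits - traits[i], axis=1)
    return np.delete(d, i).mean()

def irregularity(traits, i, q=2):
    """One plausible reading: 1 minus the Hill-number evenness (order q)
    of species i's proportional distances to the other species."""
    d = np.delete(np.linalg.norm(traits - traits[i], axis=1), i)
    p = d / d.sum()                       # proportional distances
    hill = (p ** q).sum() ** (1.0 / (1 - q))
    return 1.0 - hill / len(p)            # 0 when distances are perfectly even

def singularity(traits, i, q=2):
    """Product of distinctiveness and irregularity, as defined above."""
    return distinctiveness(traits, i) * irregularity(traits, i, q)

# Three clustered species and one trait-space outlier.
traits = np.array([[0.0, 0.0], [1.0, 0.0], [0.9, 0.1], [5.0, 5.0]])
print([round(singularity(traits, i), 3) for i in range(len(traits))])
```

    In this toy configuration the outlier is the most distinct, while clustered species facing one distant neighbour score highest on irregularity, illustrating why the product of the two captures a different signal than either metric alone.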
  5. The issue of synchronization in the power grid is receiving renewed attention as new energy sources with different dynamics enter the picture. Global metrics have been proposed to evaluate performance and analyzed under highly simplified assumptions. In this work we extend this approach to more realistic network scenarios and connect it more closely with metrics used in power engineering practice. In particular, our analysis covers networks with generators of heterogeneous ratings and richer dynamic models of machines. Under a suitable proportionality assumption on the parameters, we show that the step response of bus frequencies can be decomposed into two components. The first component is a system-wide frequency that captures the aggregate grid behavior; the residual component represents the individual bus frequency deviations from the aggregate. Using this decomposition, we define, and compute in closed form, several metrics that capture dynamic behaviors of relevance to power engineers. In particular, using the system frequency, we define industry-style metrics (Nadir, RoCoF) that are evaluated through a representative machine. We further use the norm of the residual component to define a synchronization cost that can appropriately quantify inter-area oscillations. Finally, we employ robustness analysis tools to evaluate deviations from our proportionality assumption. We show that the system frequency still captures the grid's steady-state deviation, and becomes an accurate reduced-order model of the grid as network connectivity grows. Simulation studies with practically relevant data validate the theory and further illustrate the impact of network structure and parameters on synchronization. Our analysis yields conclusions of practical interest, sometimes challenging conventional wisdom in the field.
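    For reference, the industry-style metrics above can be computed directly from a sampled system-frequency step response; the synthetic damped-oscillation trace and the nominal frequency below are illustrative assumptions for demonstration only.

```python
import numpy as np

def nadir(f, f_nominal=60.0):
    """Largest instantaneous frequency deviation from nominal (Hz)."""
    return np.max(np.abs(f - f_nominal))

def rocof(f, dt):
    """Maximum rate of change of frequency (Hz/s), via finite differences."""
    return np.max(np.abs(np.diff(f) / dt))

# Synthetic second-order-style response to a step power imbalance.
t = np.arange(0.0, 10.0, 0.01)
f = 60.0 - 0.5 * np.exp(-0.4 * t) * np.sin(1.2 * t)
print(nadir(f), rocof(f, dt=0.01))
```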