Title: An optimization‐based Monte Carlo method for estimating the two‐terminal survival signature of networks with two component classes
Evaluating two‐terminal network reliability is a classical problem with numerous applications. Because this problem is #P‐complete, practical studies involving large systems commonly resort to approximating or estimating system reliability rather than evaluating it exactly. Researchers have characterized signatures, such as the destruction spectrum and survival signature, which summarize the system's structure and give rise to procedures for evaluating or approximating network reliability. These procedures are advantageous if the signature can be computed efficiently; however, computing the signature is challenging for complex systems. With this motivation, we consider the use of Monte Carlo (MC) simulation to estimate the survival signature of a two‐terminal network in which there are two classes of i.i.d. components. In this setting, we prove that each MC replication to estimate the signature of a multi‐class system entails solving a multi‐objective maximum capacity path problem. For the case of two classes of components, we adapt a Dijkstra's‐like bi‐objective shortest path algorithm from the literature for the purpose of solving the resulting bi‐objective maximum capacity path problem. We perform computational experiments to compare our method's efficiency against intuitive benchmark approaches. Our computational results demonstrate that the bi‐objective optimization approach consistently outperforms the benchmark approaches, thereby enabling a larger number of MC replications and improved accuracy of the reliability estimation. Furthermore, the efficiency gains versus benchmark approaches appear to become more significant as the network increases in size.
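As a point of reference for the maximum capacity path subproblem mentioned in the abstract, the sketch below solves the single-objective version (the classical bottleneck, or widest path, problem) with a Dijkstra-like label-setting scan; roughly, one such objective per component class is what makes the paper's subproblem bi-objective. This is not the authors' bi-objective algorithm, and the function name and graph encoding are chosen here for illustration only.

```python
import heapq

def max_capacity_path(adj, s, t):
    """Single-objective maximum capacity (bottleneck) s-t path value via a
    Dijkstra-like label-setting scan.  adj: {node: [(neighbor, capacity), ...]}.
    Illustrative only; the paper solves a bi-objective version of this problem."""
    best = {s: float("inf")}            # best bottleneck value known per node
    heap = [(-float("inf"), s)]         # max-heap emulated with negated keys
    while heap:
        neg_cap, u = heapq.heappop(heap)
        cap = -neg_cap
        if u == t:
            return cap                  # when t is first settled, its label is optimal
        if cap < best.get(u, -float("inf")):
            continue                    # stale heap entry
        for v, c in adj.get(u, ()):
            new_cap = min(cap, c)       # bottleneck of the path extended through u
            if new_cap > best.get(v, -float("inf")):
                best[v] = new_cap
                heapq.heappush(heap, (-new_cap, v))
    return -float("inf")                # t not reachable from s

# Tiny example: the widest s-t path is s-a-t, with bottleneck capacity 2.
adj = {"s": [("a", 3), ("b", 1)], "a": [("t", 2)], "b": [("t", 4)], "t": []}
print(max_capacity_path(adj, "s", "t"))   # -> 2
```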
Award ID(s):
1751191
PAR ID:
10530525
Author(s) / Creator(s):
Publisher / Repository:
Wiley
Date Published:
Journal Name:
Naval Research Logistics (NRL)
ISSN:
0894-069X
Subject(s) / Keyword(s):
Survival signature; two-terminal reliability; network reliability; Monte Carlo simulation; bi-objective optimization
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Li, J.; Spanos, P. D.; Chen, J.-B.; Peng, Y.-B. (Ed.)
    Quantifying network reliability is a hard problem, proven to be #P-complete [1]. For real-world network planning and decision making, approximations for the network reliability problem are necessary. This study shows that tensor network contraction (TNC) methods can quickly estimate an upper bound of All Terminal Reliability, Rel_ATR(G), by solving a superset of the network reliability problem: the edge cover problem, EC(G). In addition, these tensor contraction methods can exactly solve source-terminal (S-T) reliability for the class of directed acyclic networks, Rel_S-T(G). The computational complexity of TNC methods is parameterized by treewidth, significantly benefitting from recent advancements in approximate tree decomposition algorithms [2]. This parameterization does not rely on the reliability of the graph, which means these tensor contraction methods can determine reliability faster than Monte Carlo methods on highly reliable networks, while also providing exact answers or guaranteed upper bound estimates. These tensor contraction methods are applied to grid graphs, random cubic graphs, and a selection of 58 power transmission networks [3], demonstrating computational efficiency and effective approximation using EC(G).
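The upper bound in the entry above rests on a containment: on a graph with at least two nodes, any set of operating edges that keeps all nodes connected must also touch every node, so it is an edge cover, and the total probability weight of edge covers can therefore only exceed the all-terminal reliability. The brute-force sketch below illustrates that containment on a tiny graph; it is not the tensor network contraction method, and the function names are made up here.

```python
from itertools import chain, combinations

def _connected(nodes, edges):
    """True if `edges` leave the node set `nodes` in one connected component."""
    if not nodes:
        return True
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u] - seen)
    return seen == set(nodes)

def _is_edge_cover(nodes, edges):
    """True if every node is touched by at least one edge in `edges`."""
    return set(nodes) <= set(chain.from_iterable(edges))

def atr_and_ec_bound(nodes, edges, p):
    """Brute-force all-terminal reliability and the edge-cover upper bound,
    assuming each edge operates independently with probability p."""
    rel_atr = ec_weight = 0.0
    m = len(edges)
    for k in range(m + 1):
        for subset in combinations(edges, k):
            w = p ** k * (1 - p) ** (m - k)
            if _connected(nodes, subset):
                rel_atr += w
            if _is_edge_cover(nodes, subset):
                ec_weight += w
    return rel_atr, ec_weight   # rel_atr <= ec_weight on graphs with >= 2 nodes

# Example: a 4-cycle with edge survival probability 0.9.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(atr_and_ec_bound(nodes, edges, p=0.9))   # -> roughly (0.9477, 0.9639)
```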
  2. One essential task in practice is to quantify and improve the reliability of an infrastructure network in terms of the connectivity of network components (i.e., all-terminal reliability). However, as the number of edges and nodes in the network increases, computing the all-terminal network reliability using exact algorithms becomes prohibitive. This is extremely burdensome in network designs requiring repeated computations. In this paper, we propose a novel machine learning-based framework for evaluating and improving all-terminal network reliability using Deep Neural Networks (DNNs) and Deep Reinforcement Learning (DRL). With the help of DNNs and Stochastic Variational Inference (SVI), we can effectively compute the all-terminal reliability for different network configurations in DRL. Furthermore, the Bayesian nature of the proposed SVI+DNN model allows for quantifying the estimation uncertainty while enforcing regularization and reducing overfitting. Our numerical experiment and case study show that the proposed framework provides an effective tool for infrastructure network reliability improvement. 
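For context on why the entry above builds a surrogate at all, the sketch below is a crude Monte Carlo estimate of all-terminal reliability, the kind of repeated, expensive evaluation that a learned model is meant to stand in for during design optimization. It is unrelated to the paper's SVI+DNN formulation, and the function name and defaults are invented for the sketch.

```python
import random

def mc_all_terminal_reliability(nodes, edges, p, n_samples=20000, seed=0):
    """Crude Monte Carlo estimate of all-terminal reliability: the probability
    that the surviving edges keep every node in one connected component."""
    rng = random.Random(seed)
    node_set = set(nodes)
    hits = 0
    for _ in range(n_samples):
        up = [(u, v) for u, v in edges if rng.random() < p]
        adj = {u: [] for u in nodes}
        for u, v in up:
            adj[u].append(v)
            adj[v].append(u)
        # Depth-first search from an arbitrary node over the surviving edges.
        seen, stack = set(), [nodes[0]]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(w for w in adj[u] if w not in seen)
        hits += (seen == node_set)
    return hits / n_samples

# Example: 4-node cycle with edge survival probability 0.9.
print(mc_all_terminal_reliability([0, 1, 2, 3],
                                  [(0, 1), (1, 2), (2, 3), (3, 0)],
                                  p=0.9))        # -> about 0.95 (exact value 0.9477)
```

Each call resamples thousands of edge outcomes, which is exactly what becomes burdensome when a design loop has to evaluate many candidate network configurations.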
  3. We present a neural network approach for approximating the value function of high-dimensional stochastic control problems. Our training process simultaneously updates our value function estimate and identifies the part of the state space likely to be visited by optimal trajectories. Our approach leverages insights from optimal control theory and the fundamental relation between semi-linear parabolic partial differential equations and forward-backward stochastic differential equations. To focus the sampling on relevant states during neural network training, we use the stochastic Pontryagin maximum principle (PMP) to obtain the optimal controls for the current value function estimate. By design, our approach coincides with the method of characteristics for the non-viscous Hamilton-Jacobi-Bellman equation arising in deterministic control problems. Our training loss consists of a weighted sum of the objective functional of the control problem and penalty terms that enforce the HJB equations along the sampled trajectories. Importantly, training is unsupervised in that it does not require solutions of the control problem. Our numerical experiments highlight our scheme’s ability to identify the relevant parts of the state space and produce meaningful value estimates. Using a two-dimensional model problem, we demonstrate the importance of the stochastic PMP to inform the sampling and compare to a finite element approach. With a nonlinear control affine quadcopter example, we illustrate that our approach can handle complicated dynamics. For a 100-dimensional benchmark problem, we demonstrate that our approach improves accuracy and time-to-solution and, via a modification, we show the wider applicability of our scheme.
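For orientation, the entry above can be read against the textbook Hamilton-Jacobi-Bellman (HJB) equation for a controlled diffusion, together with a schematic version of the penalized training objective it describes; the notation below is generic and chosen here, not taken from the paper.

```latex
% Generic HJB equation for dX_t = f(X_t,u_t)\,dt + \sigma(X_t)\,dW_t with running
% cost L and terminal cost G (textbook form; not the paper's notation):
\[
  \partial_t V(t,x)
  + \min_{u}\Big\{ L(x,u) + \nabla_x V(t,x)^{\top} f(x,u)
  + \tfrac{1}{2}\,\operatorname{tr}\!\big(\sigma\sigma^{\top}\nabla_x^2 V(t,x)\big) \Big\} = 0,
  \qquad V(T,x) = G(x).
\]
% Schematic penalized loss for a value network V_\theta, in the spirit of the entry
% above (the weights \beta_i and the exact penalty terms are assumptions of this sketch):
\[
  \ell(\theta)
  = \mathbb{E}\big[ J(u_\theta) \big]
  + \beta_1\, \mathbb{E}\!\left[ \int_0^T \big|\mathrm{HJB}[V_\theta](t,X_t)\big|\,dt \right]
  + \beta_2\, \mathbb{E}\big[ \big| V_\theta(T,X_T) - G(X_T) \big| \big],
\]
% where the trajectories X_t are generated with controls obtained from the stochastic
% Pontryagin maximum principle applied to the current value estimate V_\theta.
```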
  4. A novel, exact algorithm is presented to solve the path planning problem that involves finding the shortest collision-free path from a start to a goal point in a two-dimensional environment containing convex and non-convex obstacles. The proposed algorithm, which is called the shortest possible path (SPP) algorithm, constructs a network of lines connecting the vertices of the obstacles and the locations of the start and goal points which is smaller than the network generated by the visibility graph. Then it finds the shortest path from start to goal point within this network. The SPP algorithm generates a safe, smooth and obstacle-free path that has a desired distance from each obstacle. This algorithm is designed for environments that are populated sparsely with convex and non-convex polygonal obstacles. It has the capability of eliminating some of the polygons that do not play any role in constructing the optimal path. It is proven that the SPP algorithm can find the optimal path in O(nn′²) time, where n is the number of vertices of all polygons and n′ is the number of vertices that are considered in constructing the path network (n′ ≤ n). The performance of the algorithm is evaluated relative to three major classes of algorithms: heuristic, probabilistic, and classic. Different benchmark scenarios are used to evaluate the performance of the algorithm relative to the first two classes of algorithms: GAMOPP (genetic algorithm for multi-objective path planning), a representative heuristic algorithm, as well as RRT (rapidly-exploring random tree) and PRM (probabilistic road map), two well-known probabilistic algorithms. Time complexity is known for classic algorithms, so the presented algorithm is compared analytically.
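Once a collision-free line network between obstacle vertices and the start and goal points has been constructed, the remaining step in the entry above is an ordinary shortest-path search. The sketch below runs plain Dijkstra with Euclidean edge lengths over such a precomputed network; the network construction, which is where the SPP algorithm differs from the visibility graph, is assumed to be given, and all names here are illustrative.

```python
import heapq
import math

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def shortest_path(points, visibility_edges, start, goal):
    """Dijkstra over a precomputed collision-free line network.
    points: {name: (x, y)}, visibility_edges: iterable of (name, name) pairs."""
    adj = {p: [] for p in points}
    for u, v in visibility_edges:
        w = euclidean(points[u], points[v])
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue                      # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Reconstruct the path goal -> start (raises KeyError if goal is unreachable).
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return dist.get(goal, math.inf), path[::-1]

# Toy network: start s, goal g, and two obstacle corner vertices a, b.
pts = {"s": (0, 0), "a": (1, 1), "b": (2, 0), "g": (3, 1)}
print(shortest_path(pts, [("s", "a"), ("a", "b"), ("b", "g"), ("s", "b")],
                    "s", "g"))            # -> (about 3.41, ['s', 'b', 'g'])
```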
  5. In multi-objective search, edges are annotated with cost vectors consisting of multiple cost components. A path dominates another path with the same start and goal vertices iff the component-wise sum of the cost vectors of the edges of the former path is 'less than' the component-wise sum of the cost vectors of the edges of the latter path. The Pareto-optimal solution set is the set of all undominated paths from a given start vertex to a given goal vertex. Its size can be exponential in the size of the graph being searched, which makes multi-objective search time-consuming. In this paper, we therefore study how to find an approximate Pareto-optimal solution set for a user-provided vector of approximation factors. The size of such a solution set can be significantly smaller than the size of the Pareto-optimal solution set, which enables the design of approximate multi-objective search algorithms that are efficient and produce small solution sets. We present such an algorithm in this paper, called A*pex. A*pex builds on PPA*, a state-of-the-art approximate bi-objective search algorithm (where there are only two cost components) but (1) makes PPA* more efficient for bi-objective search and (2) generalizes it to multi-objective search for any number of cost components. We first analyze the correctness of A*pex and then experimentally demonstrate its efficiency advantage over existing approximate algorithms for bi- and tri-objective search. 
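The dominance relation in the entry above is easy to make concrete: an exact dominance test, a component-wise approximate test for a user-provided vector of approximation factors, and a naive filter that keeps only undominated cost vectors are sketched below. This is a conceptual illustration of the definitions, not PPA* or A*pex.

```python
def dominates(c1, c2):
    """c1 dominates c2: no component is worse and at least one is strictly better."""
    return (all(a <= b for a, b in zip(c1, c2))
            and any(a < b for a, b in zip(c1, c2)))

def approx_dominates(c1, c2, eps):
    """c1 approximately dominates c2 for approximation factors eps:
    every component of c1 is within a (1 + eps_i) factor of c2's."""
    return all(a <= (1 + e) * b for a, b, e in zip(c1, c2, eps))

def pareto_filter(costs):
    """Naive O(n^2) filter keeping only undominated cost vectors."""
    return [c for c in costs
            if not any(dominates(other, c) for other in costs if other != c)]

# Example with bi-objective path costs (e.g., distance and risk):
paths = [(4, 9), (5, 5), (7, 4), (6, 6), (10, 3)]
print(pareto_filter(paths))                               # -> [(4, 9), (5, 5), (7, 4), (10, 3)]
print(approx_dominates((5, 5), (4.8, 5.2), (0.1, 0.1)))   # -> True
```

In a label-based multi-objective search, a filter of this kind is applied to the partial-path labels kept at each vertex, which is where the potentially exponential size of the Pareto-optimal set makes itself felt and why approximate dominance can shrink the solution set so dramatically.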