Title: Data-driven Agent-based Models for Optimal Evacuation of Large Metropolitan Areas for Improved Disaster Planning
ABSTRACT Evacuation plans are designed to move people to safety in case of a disaster. Such plans mainly consist of two components: routing and scheduling. Joint optimization of these two components with the goal of minimizing total evacuation time is a computationally hard problem, especially when the problem instance is large. Moreover, in disaster situations there is often uncertainty regarding the passability of roads throughout the evacuation time period. In this paper, we present a way to model the time-varying risk associated with roads in disaster situations. We also design a heuristic method based on the well-known Large Neighborhood Search framework to perform the joint optimization task. We use real-world road network and population data from Harris County, Texas (the Houston metropolitan area) and apply our heuristic to find evacuation routes and schedules for the area. We show that the proposed method is able to find good solutions within a reasonable amount of time. We also perform agent-based simulations of the evacuation using these solutions to evaluate their quality and efficacy.
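The Large Neighborhood Search idea behind the heuristic can be pictured with a minimal sketch. Everything below is illustrative: the toy single-road scheduling instance, the capacity penalty, and the destroy/repair operators are hypothetical stand-ins, not the paper's actual operators.

```python
import random

def lns(initial, cost, destroy, repair, iterations=200, seed=0):
    """Generic Large Neighborhood Search: repeatedly destroy part of the
    incumbent solution, repair it, and keep improving candidates."""
    rng = random.Random(seed)
    current = best = initial
    for _ in range(iterations):
        candidate = repair(destroy(current, rng), rng)
        if cost(candidate) <= cost(current):
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best

# Toy instance (hypothetical): pick departure slots for N evacuees on one
# road so per-slot capacity is respected and the makespan is minimized.
CAPACITY, TRAVEL_TIME, N, SLOTS = 5, 3, 40, 20

def cost(departures):
    counts = {}
    for t in departures:
        counts[t] = counts.get(t, 0) + 1
    overflow = sum(max(0, c - CAPACITY) for c in counts.values())
    return 1000 * overflow + max(departures) + TRAVEL_TIME  # penalty + makespan

def destroy(departures, rng):
    kept = list(departures)
    for _ in range(len(kept) // 4):          # remove ~25% of the schedule
        kept.pop(rng.randrange(len(kept)))
    return kept

def repair(partial, rng):
    out = list(partial)
    while len(out) < N:                      # greedy re-insertion
        out.append(min(range(SLOTS), key=lambda s: cost(out + [s])))
    return out

print("best makespan:", max(lns([0] * N, cost, destroy, repair)) + TRAVEL_TIME)
```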
Award ID(s):
1918656, 1916805
NSF-PAR ID:
10403915
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems
Page Range / eLocation ID:
1639-1641
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Previous tsunami evacuation simulations have mostly been based on arbitrary assumptions or on inputs adapted from non-emergency situations; only a few studies have used empirical behavior data. This study bridges this gap by integrating empirical decision data from surveys on local evacuation expectations and from evacuation drills into an agent-based model of evacuation behavior for two Cascadia subduction zone (CSZ) communities that would be inundated within 20–40 min after a CSZ earthquake. The model also considers the impacts of earthquake-induced liquefaction and landslides on tsunami evacuation. Furthermore, we integrate the slope-speed component from least-cost-distance analysis to build a simulation model that better represents the complex nature of evacuations. The simulation results indicate that milling time and the evacuation participation rate have significant nonlinear impacts on tsunami mortality estimates. When people walk faster than 1 m s⁻¹, evacuation on foot is more effective because it avoids the traffic congestion incurred when driving. We also find that evacuation results are more sensitive to walking speed, milling time, evacuation participation, and choosing the closest safe location than to other behavioral variables. Minimum tsunami mortality results from maximizing the evacuation participation rate, minimizing milling time, and choosing the closest safe destination outside the inundation zone. This study's comparison of the agent-based model and the beat-the-wave (BtW) model finds consistency between the two models' results. By integrating the natural system, the built environment, and the social system, this interdisciplinary model incorporates substantial aspects of the real world into a multi-hazard agent-based platform. This model provides a unique opportunity for local authorities to prioritize their resources for hazard education, community disaster preparedness, and resilience planning.
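    A common way to realize the slope-speed component mentioned above is Tobler's hiking function; the sketch below assumes that form, which may differ from the study's exact parameterization.

```python
import math

def tobler_speed(slope):
    """Tobler's hiking function: walking speed (km/h) as a function of
    slope (rise/run); a standard choice for slope-aware least-cost
    evacuation models, assumed here for illustration."""
    return 6.0 * math.exp(-3.5 * abs(slope + 0.05))

def traverse_time(length_m, rise_m):
    """Seconds to traverse an edge of given horizontal length and rise."""
    speed_ms = tobler_speed(rise_m / length_m) * 1000 / 3600
    return length_m / speed_ms

print(f"flat 100 m: {traverse_time(100, 0):.0f} s")        # ~72 s at ~5 km/h
print(f"uphill 100 m, +10 m: {traverse_time(100, 10):.0f} s")
```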
  2.
    This paper introduces and studies a graph-based variant of the path planning problem arising in hostile environments. We consider a setting where an agent (e.g., a robot) must reach a given destination while avoiding interception by probabilistic entities that exist in the graph with a given probability and move according to a probabilistic motion pattern known a priori. Given a goal vertex and a deadline to reach it, the agent must compute the path to the goal that maximizes its chances of survival. We study the computational complexity of the problem and present two algorithms for computing high-quality solutions in the general case: an exact algorithm based on Mixed-Integer Nonlinear Programming, which works well on instances of moderate size, and a pseudo-polynomial-time heuristic algorithm that solves large-scale problems in reasonable time. We also consider the two limit cases where the agent can survive with probability 0 or 1, and provide specialized algorithms to detect these kinds of situations more efficiently.
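    The deadline-constrained survival objective can be illustrated with a simple dynamic program. The sketch below deliberately simplifies the paper's model by assuming independent, fixed per-edge survival probabilities (the actual model tracks moving probabilistic entities); the graph and its probabilities are made up.

```python
from functools import lru_cache

# vertex -> list of (neighbor, probability of surviving that move)
GRAPH = {
    "s": [("a", 0.9), ("b", 0.8)],
    "a": [("g", 0.7)],
    "b": [("g", 0.95)],
    "g": [],
}

@lru_cache(maxsize=None)
def best_survival(vertex, steps_left, goal="g"):
    """Max probability of reaching `goal` from `vertex` within
    `steps_left` moves, multiplying per-edge survival probabilities."""
    if vertex == goal:
        return 1.0
    if steps_left == 0:
        return 0.0
    return max((p * best_survival(nxt, steps_left - 1, goal)
                for nxt, p in GRAPH[vertex]), default=0.0)

print(best_survival("s", 2))  # via b: 0.8 * 0.95 = 0.76
```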
  3. The offline pickup and delivery problem with time windows (PDPTW) is a classical combinatorial optimization problem in the transportation community, which has proven to be very challenging computationally. Due to the complexity of the problem, practical problem instances can be solved only via heuristics, which trade off solution quality for computational tractability. Among the various heuristics, a common strategy is problem decomposition, that is, the reduction of a large-scale problem into a collection of smaller sub-problems, with spatial and temporal decompositions being two natural approaches. While spatial decomposition has been successful in certain settings, effective temporal decomposition has been challenging due to the difficulty of stitching together the sub-problem solutions across the decomposition boundaries. In this work, we introduce a novel temporal decomposition scheme for solving a class of PDPTWs that have narrow time windows, for which it provides both fast and high-quality solutions. We utilize techniques that have been popularized recently in the context of online dial-a-ride problems, along with the general idea of rolling-horizon optimization. To the best of our knowledge, this is the first attempt to solve offline PDPTWs using such an approach. To show the performance and scalability of our framework, we use the optimization of paratransit services as a motivating example. Due to the lack of benchmark solvers similar to ours (i.e., temporal decomposition with an online solver), we compare our results with an offline heuristic algorithm using Google OR-Tools. In smaller problem instances (with an average of 129 requests per instance), the baseline approach is competitive with our framework. However, in larger problem instances (approximately 2,500 requests per instance), our framework is more scalable and can provide good solutions to problem instances of varying degrees of difficulty, while the baseline algorithm often fails to find a feasible solution within comparable compute times.
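    The rolling-horizon idea can be sketched as a loop that slices requests by pickup time and hands each window to a sub-solver, carrying vehicle state across the boundary. All names are illustrative, and the window solver below is a deliberately trivial first-come-first-served placeholder, not the paper's heuristic.

```python
def rolling_horizon(requests, horizon, solve_window):
    """Slice requests into consecutive time windows and solve each with a
    sub-solver, carrying vehicle state across window boundaries.  Real
    implementations overlap windows and re-optimize boundary requests;
    this sketch keeps the windows disjoint for brevity."""
    requests = sorted(requests, key=lambda r: r["pickup_time"])
    t, plans, state = 0, [], {}
    end = max(r["pickup_time"] for r in requests)
    while t <= end:
        window = [r for r in requests if t <= r["pickup_time"] < t + horizon]
        if window:
            plan, state = solve_window(window, state)
            plans.append(plan)
        t += horizon
    return plans

def solve_window(window, state):
    """Placeholder sub-solver: one vehicle, first-come-first-served.
    A real implementation would run a full PDPTW heuristic here."""
    t = state.get("free_at", 0)
    plan = []
    for r in window:
        t = max(t, r["pickup_time"]) + r["ride_time"]
        plan.append((r["id"], t))            # (request id, drop-off time)
    return plan, {"free_at": t}

demo = [{"id": i, "pickup_time": 10 * i, "ride_time": 15} for i in range(6)]
for plan in rolling_horizon(demo, horizon=40, solve_window=solve_window):
    print(plan)
```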
  4. Combinatorial optimization problems are prevalent in engineering and industry. Some are NP-hard and thus difficult to solve on edge devices with limited power and computing resources. The Quadratic Unconstrained Binary Optimization (QUBO) problem is a valuable emerging model that can formulate numerous combinatorial problems, such as Max-Cut, the traveling salesman problem, and graph coloring. The QUBO model also maps naturally onto two emerging computation models, quantum computing and neuromorphic computing, which can potentially boost the speed and energy efficiency of solving combinatorial problems. In this work, we design a neuromorphic QUBO solver composed of a swarm of spiking neural networks (SNNs) that conducts a population-based meta-heuristic search for solutions. The proposed model can achieve about a 20x–40x speedup on large QUBO problems, in terms of time steps, compared to a traditional neural network solver. As a co-design, we evaluate the neuromorphic swarm solver on a 40 nm, 25 mW Resistive RAM (RRAM) Compute-in-Memory (CIM) SoC with a 2.25 MB RRAM-based accelerator and an embedded Cortex-M3 core. The collaborative SNN swarm fully exploits the CIM accelerator's strength in matrix and vector multiplications. Compared to previous works, such an algorithm-hardware synergized solver exhibits advantageous speed and energy efficiency for edge devices.
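    To make the QUBO formulation concrete, the sketch below maps a small Max-Cut instance to a QUBO matrix and minimizes the energy x^T Q x with a toy single-bit-flip search; the paper's solver instead performs this search with a swarm of spiking networks, which is not reproduced here.

```python
import random

def qubo_energy(x, Q):
    """Energy x^T Q x of a binary vector x under QUBO matrix Q."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def max_cut_qubo(edges, n):
    """Standard Max-Cut -> QUBO mapping: for each edge (i, j), the cut
    contribution x_i + x_j - 2*x_i*x_j is negated so that minimizing
    the energy maximizes the cut."""
    Q = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        Q[i][i] -= 1; Q[j][j] -= 1
        Q[i][j] += 1; Q[j][i] += 1
    return Q

def greedy_flip(Q, n, iters=200, seed=0):
    """Toy single-bit-flip local search over the QUBO energy."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(iters):
        y = x[:]
        y[rng.randrange(n)] ^= 1
        if qubo_energy(y, Q) <= qubo_energy(x, Q):
            x = y
    return x, qubo_energy(x, Q)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # 4-cycle plus a chord
x, e = greedy_flip(max_cut_qubo(edges, 4), 4)
print("assignment:", x, "cut size:", -e)           # optimum cuts 4 edges
```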
  5.
    Accurate, precise, and timely estimation of crop yield is key to a grower's ability to proactively manage crop growth and predict harvest logistics. Such yield predictions typically are based on multi-parametric models and in-situ sampling. Here we investigate the extension of a greenhouse study to low-altitude unmanned aerial systems (UAS). Our principal objective was to investigate snap bean crop (Phaseolus vulgaris) yield using imaging spectroscopy (hyperspectral imaging) in the visible to near-infrared (VNIR; 400–1000 nm) region via UAS. We aimed to solve the problem of crop yield modelling by identifying spectral features that explain yield and by evaluating the earliest time period at which accurate yield prediction is possible. We introduced a Python library, named Jostar, for spectral feature selection. Embedded in Jostar, we proposed a new ranking method for selected features that reaches an agreement between multiple optimization models. Moreover, we implemented a well-known denoising algorithm for the spectral data used in this study. This study benefited from two years of remotely sensed data, captured at multiple instances over the summers of 2019 and 2020, with 24 plots and 18 plots, respectively. Two harvest-stage models, early and late harvest, were assessed at two different locations in upstate New York, USA. Six varieties of snap bean were quantified using two components of yield, pod weight and seed length. We used two different vegetation detection algorithms, the Red-Edge Normalized Difference Vegetation Index (RENDVI) and the Spectral Angle Mapper (SAM), to subset the fields into vegetation and non-vegetation pixels. Partial least squares regression (PLSR) was used as the regression model. Among the nine optimization models embedded in Jostar, we selected the Genetic Algorithm (GA), Ant Colony Optimization (ACO), Simulated Annealing (SA), and Particle Swarm Optimization (PSO), along with their resulting joint ranking. The findings show that pod weight can be explained with a high coefficient of determination (R² = 0.78–0.93) and low root-mean-square error (RMSE = 940–1369 kg/ha) for the two years of data. Seed-length yield assessment resulted in higher accuracies (R² = 0.83–0.98) and lower errors (RMSE = 4.245–6.018 mm). Among the optimization models used, ACO and SA outperformed the others, and the SAM vegetation detection approach showed improved results compared to the RENDVI approach when dense canopies were examined. Wavelengths at 450, 500, 520, 650, 700, and 760 nm were identified in almost all data sets and harvest-stage models used. The period between 44 and 55 days after planting (DAP) was identified as the optimal time period for yield assessment. Future work should involve transferring the learned concepts to a multispectral system for eventual operational use; further attention should also be paid to seed length as a ground-truth data collection technique, since this yield indicator is far more rapid and straightforward to collect.
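    The PLSR modelling step can be sketched with scikit-learn on synthetic spectra. The band grid, informative wavelengths, and simulated response below are illustrative, not the study's data, and Jostar's feature selection is omitted.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
wavelengths = np.arange(400, 1001, 5)          # VNIR range, 5 nm bands
X = rng.normal(size=(120, wavelengths.size))   # fake reflectance spectra
# Simulate a yield response driven by a few informative bands
informative = np.isin(wavelengths, [450, 500, 700, 760])
y = X[:, informative].sum(axis=1) + rng.normal(scale=0.3, size=120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
print(f"held-out R^2: {pls.score(X_te, y_te):.2f}")
```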