

Title: A stochastic-statistical residential burglary model with independent Poisson clocks
Residential burglary is a social problem in every major urban area. As such, progress has been made in developing quantitative, informative and applicable models for this type of crime: (1) the deterministic-time-step (DTS) model [Short, D'Orsogna, Pasour, Tita, Brantingham, Bertozzi & Chayes (2008) Math. Models Methods Appl. Sci. 18, 1249–1267], a pioneering agent-based statistical model of residential burglary criminal behaviour that assumes deterministic time steps for event arrivals, in which the aggregate pattern formation of residential burglary is quantitatively studied for the first time; (2) the SSRB model (agent-based stochastic-statistical model of residential burglary crime) [Wang, Zhang, Bertozzi & Short (2019) Active Particles, Vol. 2, Springer Nature Switzerland AG, in press], in which the stochastic component of the model is analysed theoretically by introducing a Poisson clock that turns the time steps into exponentially distributed random variables. To incorporate the independence of agents, in this work five types of Poisson clocks are taken into consideration. Poisson clocks (I), (II) and (III) govern independent agent actions of burglary behaviour, and Poisson clocks (IV) and (V) govern interactions of agents with the environment. All the Poisson clocks are independent, and the time increments are independently exponentially distributed, which is more suitable for modelling the individual actions of agents. Applying the method of merging and splitting of Poisson processes, the independent Poisson clocks can be treated as one, making the analysis and simulation similar to those of the SSRB model. A martingale formulation is derived, consisting of a deterministic and a stochastic component. A scaling property of the martingale formulation with varying burglar population is found, which provides a theoretical explanation of finite size effects. The theory is supported by quantitative numerical simulations using pattern-formation quantifying statistics.
Results presented here will be transformative for both the application and the analysis of agent-based models, for residential burglary and in other domains.
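The merging-and-splitting device invoked in the abstract can be illustrated numerically. The sketch below (an illustration only; the rates and time horizon are made-up parameters, not taken from the paper) checks that running several independent Poisson clocks is statistically equivalent to running one merged clock with the summed rate and attributing each ring to clock i with probability rate_i / total_rate:

```python
import random

RATES = [0.5, 1.0, 2.0]   # hypothetical clock rates lambda_i, not from the paper
HORIZON = 10000.0         # simulated time window

def simulate_separate(rates, horizon, rng):
    """Run each Poisson clock on its own; return the event count of each clock."""
    counts = []
    for lam in rates:
        t, n = 0.0, 0
        while True:
            t += rng.expovariate(lam)   # exponentially distributed increment
            if t > horizon:
                break
            n += 1
        counts.append(n)
    return counts

def simulate_merged(rates, horizon, rng):
    """Run ONE merged clock of rate sum(rates); split each ring back to
    clock i with probability rates[i] / sum(rates)."""
    total = sum(rates)
    counts = [0] * len(rates)
    t = 0.0
    while True:
        t += rng.expovariate(total)
        if t > horizon:
            break
        i = rng.choices(range(len(rates)), weights=rates)[0]
        counts[i] += 1
    return counts

rng = random.Random(0)
sep = simulate_separate(RATES, HORIZON, rng)
mer = simulate_merged(RATES, HORIZON, rng)
# Either way, clock i rings approximately rates[i] * HORIZON times.
```

This equivalence is what lets the analysis treat the five independent clocks as a single one, so that the simulation machinery of the SSRB model carries over.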
Award ID(s):
1737770 1737925
NSF-PAR ID:
10138642
Journal Name:
European Journal of Applied Mathematics
ISSN:
0956-7925
Page Range / eLocation ID:
1 to 27
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Deterministic compartmental models for infectious diseases give the mean behaviour of stochastic agent-based models. These models work well for counterfactual studies in which a fully mixed large-scale population is relevant. However, with finite size populations, chance variations may lead to significant departures from the mean. In real-life applications, finite size effects arise from the variance of individual realizations of an epidemic course about its fluid limit. In this article, we consider the classical stochastic Susceptible-Infected-Recovered (SIR) model, and derive a martingale formulation consisting of a deterministic and a stochastic component. The deterministic part coincides with the classical deterministic SIR model and we provide an upper bound for the stochastic part. Through analysis of the stochastic component depending on varying population size, we provide a theoretical explanation of finite size effects. Our theory is supported by quantitative and direct numerical simulations of theoretical infinitesimal variance. Case studies of coronavirus disease 2019 (COVID-19) transmission in smaller populations illustrate that the theory provides an envelope of possible outcomes that includes the field data.
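The deterministic-plus-stochastic decomposition described above can be seen in a small experiment: a Gillespie (exact stochastic) simulation of the SIR chain scattered around the fluid-limit ODE. The sketch below is illustrative only; the parameters are hypothetical and not taken from any case study in the article.

```python
import random

def gillespie_sir(N, I0, beta, gamma, rng):
    """Exact stochastic simulation of the SIR chain; returns the final
    recovered count (the epidemic's final size)."""
    S, I, R = N - I0, I0, 0
    while I > 0:
        infect_rate = beta * S * I / N
        recover_rate = gamma * I
        total = infect_rate + recover_rate
        # (holding time is exponential with rate `total`; we only need the jumps)
        if rng.random() < infect_rate / total:
            S, I = S - 1, I + 1
        else:
            I, R = I - 1, R + 1
    return R

def deterministic_sir(N, I0, beta, gamma, dt=0.01, t_end=200.0):
    """Euler integration of the fluid-limit ODE; returns the final recovered count."""
    S, I, R = float(N - I0), float(I0), 0.0
    for _ in range(int(t_end / dt)):
        new_inf = beta * S * I / N * dt
        new_rec = gamma * I * dt
        S -= new_inf
        I += new_inf - new_rec
        R += new_rec
    return R

rng = random.Random(1)
N, I0, beta, gamma = 1000, 10, 0.3, 0.1   # illustrative parameters (R0 = 3)
ode_final = deterministic_sir(N, I0, beta, gamma)
runs = [gillespie_sir(N, I0, beta, gamma, rng) for _ in range(20)]
# Individual realizations fluctuate around the ODE prediction; the relative
# spread shrinks as N grows, which is the finite size effect in miniature.
```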
2. Capacity management, whether it involves servers in a data center, or human staff in a call center, or doctors in a hospital, is largely about balancing a resource-delay tradeoff. On the one hand, one would like to turn off servers when not in use (or send home staff that are idle) to save on resources. On the other hand, one wants to avoid the considerable setup time required to turn an "off" server back "on." This paper aims to understand the delay component of this tradeoff, namely, what is the effect of setup time on average delay in a multi-server system? Surprisingly little is known about the effect of setup times on delay. While there has been some work on studying the M/M/k with Exponentially-distributed setup times, these works provide only iterative methods for computing mean delay, giving little insight as to how delay is affected by k, by load, and by the setup time. Furthermore, setup time in practice is much better modeled by a Deterministic random variable, and, as this paper shows, the scaling effect of a Deterministic setup time is nothing like that of an Exponentially-distributed setup time. This paper provides the first analysis of the M/M/k with Deterministic setup times. We prove a lower bound on the effect of setup on delay, where our bound is highly accurate for the common case where the setup time is much higher than the job service time. Our result is a relatively simple algebraic formula which provides insights on how delay scales with the input parameters. Our proof uses a combination of renewal theory, martingale arguments and novel probabilistic arguments, providing strong intuition on the transient behavior of a system that turns servers on and off.
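The paper analyses the M/M/k; as a minimal illustration of why the setup distribution matters, here is a single-server (k = 1) analogue, simulated with a Lindley-type recursion in which the server switches off whenever it goes idle and pays a setup at the start of each busy period. All parameters are hypothetical, chosen only so that the setup time dwarfs the service time, as in the paper's regime of interest.

```python
import random

def mean_wait(lam, mu, setup_sampler, n_jobs, rng):
    """Single-server FCFS queue where the server turns off when idle and
    incurs a setup time at the start of each busy period.
    Returns the mean time jobs spend waiting before service begins."""
    t_arrival = 0.0     # arrival time of the current job
    depart_prev = 0.0   # departure time of the previous job
    total_wait = 0.0
    for _ in range(n_jobs):
        t_arrival += rng.expovariate(lam)           # Poisson arrivals
        if t_arrival >= depart_prev:                # server found idle => it is off
            start = t_arrival + setup_sampler()     # pay the setup before serving
        else:                                       # server busy => wait in queue
            start = depart_prev
        total_wait += start - t_arrival
        depart_prev = start + rng.expovariate(mu)   # exponential service time
    return total_wait / n_jobs

rng = random.Random(2)
lam, mu, mean_setup, n = 0.5, 1.0, 5.0, 200_000   # illustrative: setup >> service
w_none = mean_wait(lam, mu, lambda: 0.0, n, rng)
w_det = mean_wait(lam, mu, lambda: mean_setup, n, rng)
w_exp = mean_wait(lam, mu, lambda: rng.expovariate(1.0 / mean_setup), n, rng)
# Setup inflates delay, and the two setup distributions inflate it by different
# amounts even though their means are equal: the distribution's shape matters.
```

In this single-server sketch the Exponential setup hurts more than the Deterministic one (its residual setup is larger); the paper's point is that in the multi-server M/M/k the two cases scale in fundamentally different ways.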
  3. Systems engineering processes coordinate the efforts of many individuals to design a complex system. However, the goals of the involved individuals do not necessarily align with the system-level goals. Everyone, including managers, systems engineers, subsystem engineers, component designers, and contractors, is self-interested. It is not currently understood how this discrepancy between organizational and personal goals affects the outcome of complex systems engineering processes. To answer this question, we need a systems engineering theory that accounts for human behavior. Such a theory can be ideally expressed as a dynamic hierarchical network game of incomplete information. The nodes of this network represent individual agents and the edges the transfer of information and incentives. All agents decide independently on how much effort they should devote to a delegated task by maximizing their expected utility; the expectation is over their beliefs about the actions of all other individuals and the moves of nature. An essential component of such a model is the quality function, defined as the map between an agent’s effort and the quality of their job outcome. In the economics literature, the quality function is assumed to be a linear function of effort with additive Gaussian noise. This simplistic assumption ignores two critical factors relevant to systems engineering: (1) the complexity of the design task, and (2) the problem-solving skills of the agent. Systems engineers establish their beliefs about these two factors through years of job experience. In this paper, we encode these beliefs in clear mathematical statements about the form of the quality function. Our approach proceeds in two steps: (1) we construct a generative stochastic model of the delegated task, and (2) we develop a reduced order representation suitable for use in a more extensive game-theoretic model of a systems engineering process. 
Focusing on the early design stages of a systems engineering process, we model the design task as a function maximization problem and, thus, we associate the systems engineer's beliefs about the complexity of the task with their beliefs about the complexity of the function being maximized. Furthermore, we associate an agent's problem-solving skills with the strategy they use to solve the underlying function maximization problem. We identify two agent types: "naïve" (follows a random search strategy) and "skillful" (follows a Bayesian global optimization strategy). Through an extensive simulation study, we show that the assumption of the linear quality function is only valid for small effort levels. In general, the quality function is an increasing, concave function with derivative and curvature that depend on the problem complexity and agent's skills.
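The "increasing, concave" shape of the quality function is easy to reproduce for the "naïve" agent type (random search); the objective function and settings below are hypothetical stand-ins, not the generative model used in the paper, and the "skillful" Bayesian-optimization agent is not sketched here.

```python
import math
import random

def task(x):
    """Hypothetical design objective on [0, 1]; stands in for the delegated task."""
    return math.sin(5.0 * x) * math.exp(-x)

def quality(effort, n_instances, rng):
    """Naive (random-search) agent: the quality of an effort level is the best
    objective value found after `effort` evaluations, averaged over many
    random task instances."""
    total = 0.0
    for _ in range(n_instances):
        total += max(task(rng.random()) for _ in range(effort))
    return total / n_instances

rng = random.Random(3)
efforts = [1, 2, 4, 8, 16, 32]
q = [quality(e, 5000, rng) for e in efforts]
# q increases with effort but with diminishing returns (a concave shape):
# each additional evaluation is worth less than the previous one, so a linear
# quality function is only a reasonable approximation at small effort levels.
```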
4. We consider a collection of Markov chains that model the evolution of multitype biological populations. The state space of the chains is the positive orthant, and the boundary of the orthant is the absorbing state for the Markov chain and represents the extinction states of different population types. We are interested in the long-term behavior of the Markov chain away from extinction, under a small noise scaling. Under this scaling, the trajectory of the Markov process over any compact interval converges in distribution to the solution of an ordinary differential equation (ODE) evolving in the positive orthant. We study the asymptotic behavior of the quasi-stationary distributions (QSD) in this scaling regime. Our main result shows that, under suitable conditions, the limit points of the QSD are supported on the union of interior attractors of the flow determined by the ODE. We also give lower bounds on expected extinction times which scale exponentially with the system size. Results of this type when the deterministic dynamical system obtained under the scaling limit is given by a discrete-time evolution equation and the dynamics are essentially in a compact space (namely, the one-step map is a bounded function) have been studied by Faure and Schreiber (2014). Our results extend these to a setting of an unbounded state space and continuous-time dynamics. The proofs rely on uniform large deviation results for small noise stochastic dynamical systems and methods from the theory of continuous-time dynamical systems. In general, QSD for Markov chains with absorbing states and unbounded state spaces may not exist. We study one basic family of binomial-Poisson models in the positive orthant where one can use Lyapunov function methods to establish existence of QSD and also to argue the tightness of the QSD of the scaled sequence of Markov chains. The results from the first part are then used to characterize the support of limit points of this sequence of QSD.
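The exponential growth of extinction times with system size can be seen in the simplest one-dimensional setting. The sketch below uses an illustrative logistic birth-death chain (not the multitype models of the paper) and the classical birth-death first-passage formula rather than simulation, so the answer is exact up to truncation.

```python
def mean_extinction_time(K, b=2.0, d=1.0):
    """Exact mean time to extinction, started from one individual, for a
    logistic birth-death chain: birth rate b*n, death rate d*n + n*(n-1)/K,
    absorbing at 0. Uses the classical birth-death first-passage formula
    E_1[T_0] = sum over n of (b_1 ... b_{n-1}) / (d_1 ... d_n)."""
    n_max = 10 * K + 50          # terms beyond this truncation are negligible here
    total, prod = 0.0, 1.0       # prod = product of b_k / d_k over k < n
    for n in range(1, n_max + 1):
        death = d * n + n * (n - 1) / K
        total += prod / death
        prod *= (b * n) / death
    return total

times = [mean_extinction_time(K) for K in (2, 4, 8, 16)]
# Expected extinction times grow rapidly with the system size K: the chain
# settles into a quasi-stationary regime near its carrying capacity long
# before extinction occurs.
```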
5. We consider the problem of spectrum sharing by multiple cellular operators. We propose a novel deep Reinforcement Learning (DRL)-based distributed power allocation scheme which utilizes the multi-agent Deep Deterministic Policy Gradient (MA-DDPG) algorithm. In particular, we model the base stations (BSs) that belong to the multiple operators sharing the same band, as DRL agents that simultaneously determine the transmit powers to their scheduled user equipment (UE) in a synchronized manner. The power decision of each BS is based on its own observation of the radio frequency (RF) environment, which consists of interference measurements reported from the UEs it serves, and a limited amount of information obtained from other BSs. One advantage of the proposed scheme is that it addresses the single-agent non-stationarity problem of RL in the multi-agent scenario by incorporating the actions and observations of other BSs into each BS's own critic, which helps it to gain a more accurate perception of the overall RF environment. A centralized-training-distributed-execution framework is used to train the policies: the critics are trained over the joint actions and observations of all BSs, while the actor of each BS only takes the local observation as input in order to produce the transmit power. Simulation with the 6 GHz Unlicensed National Information Infrastructure (U-NII)-5 band shows that the proposed power allocation scheme can achieve better throughput performance than several state-of-the-art approaches.
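The centralized-training-distributed-execution split described above is a structural idea that can be sketched without any deep-learning library. Below, fixed random linear maps stand in for the (untrained) actor and critic networks; all names, dimensions, and parameters are hypothetical and only the information flow matches the description: each actor consumes its own local observation, while the single critic consumes the concatenated joint observations and actions.

```python
import math
import random

class Actor:
    """Per-BS policy: maps the BS's *local* observation to a transmit power.
    A fixed random linear map stands in for the actor network (illustration only)."""
    def __init__(self, obs_dim, p_max, rng):
        self.w = [rng.uniform(-1.0, 1.0) for _ in range(obs_dim)]
        self.p_max = p_max

    def act(self, obs):
        s = sum(w * o for w, o in zip(self.w, obs))
        return self.p_max / (1.0 + math.exp(-s))   # squash into (0, p_max)

class CentralCritic:
    """Centralized critic: scores the *joint* observations and actions of all
    BSs. Conditioning on everyone's behaviour is what mitigates the
    non-stationarity each agent would otherwise face during training."""
    def __init__(self, joint_dim, rng):
        self.w = [rng.uniform(-1.0, 1.0) for _ in range(joint_dim)]

    def q_value(self, joint_obs, joint_actions):
        x = joint_obs + joint_actions
        return sum(w * v for w, v in zip(self.w, x))

rng = random.Random(4)
n_bs, obs_dim, p_max = 3, 4, 1.0
actors = [Actor(obs_dim, p_max, rng) for _ in range(n_bs)]
critic = CentralCritic(n_bs * obs_dim + n_bs, rng)

# Distributed execution: each actor sees only its own interference measurements.
local_obs = [[rng.uniform(0.0, 1.0) for _ in range(obs_dim)] for _ in range(n_bs)]
powers = [a.act(o) for a, o in zip(actors, local_obs)]

# Centralized training: the critic sees the concatenated joint state and actions.
q = critic.q_value([v for obs in local_obs for v in obs], powers)
```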