Title: A Markov Decision Process Framework for Efficient and Implementable Contact Tracing and Isolation
Efficient contact tracing and isolation is an effective strategy to control epidemics. It was used effectively during the Ebola epidemic and successfully implemented in several parts of the world during the ongoing COVID-19 pandemic. An important consideration in contact tracing is the budget on the number of individuals asked to quarantine -- the budget is limited for socioeconomic reasons. In this paper, we present a Markov Decision Process (MDP) framework to formulate the problem of using contact tracing to reduce the size of an outbreak while asking a limited number of people to quarantine. We formulate each step of the MDP as a combinatorial problem, MinExposed, which we demonstrate is NP-Hard; as a result, we develop an LP-based approximation algorithm. Though this algorithm directly solves MinExposed, it is often impractical in the real world due to information constraints. To this end, we develop a greedy approach based on insights from the analysis of the previous algorithm, which we show is more interpretable. A key feature of the greedy algorithm is that it does not need complete information of the underlying social contact network, which makes the heuristic implementable in practice, an important consideration. Finally, we carry out experiments on simulations of the MDP run on real-world networks, and show how the algorithms can help in bending the epidemic curve while limiting the number of isolated individuals. Our experimental results demonstrate that the greedy algorithm and its variants are especially effective, robust, and practical in a variety of realistic scenarios, such as when the contact graph and specific transmission probabilities are not known. All code can be found in our GitHub repository: this https URL.
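To make the flavor of the greedy step concrete, here is a minimal sketch of a budget-limited, degree-based quarantine heuristic on a contact graph. It is an illustration only: the graph, the scoring rule, and the function names are assumptions, not the paper's exact MinExposed formulation or its LP-based and greedy algorithms.

```python
# Illustrative sketch only: a degree-based greedy heuristic for choosing which
# exposed contacts to quarantine under a budget, in the spirit of a greedy
# contact-tracing policy. The graph, node labels, and scoring rule here are
# assumptions, not the paper's exact algorithm.
import networkx as nx


def greedy_quarantine(graph: nx.Graph, infected: set, budget: int) -> set:
    """Pick up to `budget` contacts of infected nodes to quarantine,
    preferring contacts with the most not-yet-exposed neighbors."""
    # Contacts of infected individuals (candidates for quarantine).
    exposed = {v for u in infected for v in graph.neighbors(u)} - infected

    def score(v):
        # Number of susceptible (neither infected nor already exposed)
        # neighbors this contact could go on to expose.
        return sum(1 for w in graph.neighbors(v)
                   if w not in infected and w not in exposed)

    ranked = sorted(exposed, key=score, reverse=True)
    return set(ranked[:budget])


if __name__ == "__main__":
    G = nx.erdos_renyi_graph(200, 0.03, seed=1)   # stand-in contact network
    seeds = {0, 1, 2}                              # currently infected nodes
    print(greedy_quarantine(G, seeds, budget=10))
```

The scoring rule reflects the intuition that, when the full network and transmission probabilities are unknown, local information about how many not-yet-exposed neighbors a contact has can still guide who should be asked to quarantine.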
Authors:
Award ID(s):
1918656
Publication Date:
NSF-PAR ID:
10313646
Journal Name:
ArXiv.org
ISSN:
2331-8422
Sponsoring Org:
National Science Foundation
More Like this
  1. The 2018–2020 Ebola virus disease epidemic in the Democratic Republic of the Congo (DRC) resulted in 3481 cases (probable and confirmed) and 2299 deaths. In this paper, we use a novel statistical method to analyze the individual-level incidence and hospitalization data on DRC Ebola victims. Our analysis suggests that an increase in the rate of quarantine and isolation, which shortened the infectiousness period by approximately one day during the epidemic's third and final wave, was likely responsible for the eventual containment of the outbreak. The analysis further reveals that the total effective population size, or the average number of individuals at risk of disease exposure across the three epidemic waves over the period of 24 months, was around 16,000 -- a much smaller number than previously estimated, and likely evidence of at least partial protection of the population at risk through ring vaccination and contact tracing as well as adherence to strict quarantine and isolation policies.
  2. Cycling as a green transportation mode has been promoted by many governments all over the world. As a result, constructing effective bike lanes has become a crucial task for governments promoting the cycling lifestyle, as well-planned bike paths can reduce traffic congestion and decrease safety risks for both cyclists and motor vehicle drivers. Unfortunately, existing trajectory mining approaches for bike lane planning do not consider key realistic government constraints: 1) budget limitations, 2) construction convenience, and 3) bike lane utilization. In this paper, we propose a data-driven approach to develop bike lane construction plans based on large-scale real-world bike trajectory data. We enforce these constraints to formulate our problem and introduce a flexible objective function to tune the trade-off between covering more users and covering longer portions of their trajectories. We prove the NP-hardness of the problem and propose greedy-based heuristics to address it. Finally, we deploy our system on Microsoft Azure, providing extensive experiments and case studies to demonstrate the effectiveness of our approach.
  3. Diffusion of information in social networks has been the focus of intense research in recent decades due to its significant impact in shaping public discourse through group and individual influence. Existing research primarily models influence as a binary property of entities: influenced or not influenced. While this is a useful abstraction, it discards the notion of degree of influence, i.e., that certain individuals may be influenced ``more'' than others. We introduce the notion of \emph{attitude}, which, as described in social psychology, is the degree to which an entity is influenced by the information. Intuitively, attitude captures the number of distinct neighbors of an entity influencing the latter. We present an information diffusion model (the AIC model) that quantifies the degree of influence, i.e., the attitude of individuals, in a social network. With this model, we formulate and study the attitude maximization problem. We prove that the function for computing attitude is monotonic and submodular, and that the attitude maximization problem is NP-Hard. We present a greedy algorithm for maximization with an approximation guarantee of $(1-1/e)$ (a minimal sketch of this standard greedy scheme appears after this list). In the context of the AIC model, we study two problems, with the aim of investigating scenarios where attaining individuals with high attitude is objectively more important than maximizing the attitude of the entire network. In the first problem, we introduce the notion of \emph{actionable attitude}; intuitively, individuals with actionable attitude are likely to ``act'' on their attained attitude. We show that the function for computing actionable attitude, unlike that for computing attitude, is not submodular but is \emph{approximately submodular}. We present an approximation algorithm for maximizing actionable attitude in a network. In the second problem, we consider identifying the number of individuals in the network with attitude above a certain threshold. In this context, the function for computing the number of individuals with attitude above a given threshold induced by a seed set is \emph{neither submodular nor supermodular}. We present heuristics for realizing the solution to the problem. We experimentally evaluated our algorithms and studied empirical properties of the attitude of nodes in the network, such as the spatial and value distribution of high-attitude nodes.
  4. One important class of applications entails a robot scrutinizing, monitoring, or recording the evolution of an uncertain time-extended process. This sort of situation leads to an interesting family of active perception problems that can be cast as planning problems in which the robot is limited in what it sees and must, thus, choose what to pay attention to. The distinguishing characteristic of this setting is that the robot has influence over what it captures via its sensors, but exercises no causal authority over the process evolving in the world. As such, the robot's objective is to observe the underlying process and to produce a "chronicle" of occurrent events, subject to a goal specification of the sorts of event sequences that may be of interest. This paper examines variants of such problems in which the robot aims to collect sets of observations to meet a rich specification of their sequential structure. We study this class of problems by modeling a stochastic process via a variant of a hidden Markov model and specify the event sequences of interest as a regular language, developing a vocabulary of "mutators" that enable sophisticated requirements to be expressed. Under different suppositions on the information gleaned about the event model, we formulate and solve different planning problems. The core underlying idea is the construction of a product between the event model and a specification automaton. Using this product, we compute a policy that minimizes the expected number of steps to reach a goal state (a value-iteration sketch of this expected-steps objective appears after this list). We introduce a general algorithm for this problem as well as several more efficient algorithms for important special cases. The paper reports and compares performance metrics by drawing on small case studies analyzed in depth via simulation. Specifically, we study the effect of the robot's observation model on the average time required for the robot to record a desired story. We also compare our algorithm with a baseline greedy algorithm, showing that ours outperforms the greedy algorithm in terms of the average time to record a desired story. In addition, experiments show that the algorithms tailored to specialized variants of the problem are considerably more efficient than the general algorithm.
  5. This paper focuses on a new and challenging setting for hard-label adversarial attacks on text data that takes budget information into account. Although existing approaches can successfully generate adversarial examples in the hard-label setting, they rely on the ideal assumption that the victim model does not restrict the number of queries. However, in real-world applications the query budget is usually tight or limited. Moreover, existing hard-label adversarial attack techniques use a genetic algorithm to optimize discrete text data by maintaining a number of adversarial candidates during optimization, which can lead to low-quality adversarial examples in the tight-budget setting. To solve this problem, we propose a new method named TextHoaxer that formulates the budgeted hard-label adversarial attack task on text data as a gradient-based optimization problem over a perturbation matrix in the continuous word embedding space. Compared with genetic-algorithm-based optimization, our solution uses only a single initialized adversarial example as the adversarial candidate for optimization, which significantly reduces the number of queries. The optimization is guided by a new objective function consisting of three terms: a semantic similarity term, a pair-wise perturbation constraint, and a sparsity constraint (an illustrative sketch of such a three-term objective appears after this list). The semantic similarity term and the pair-wise perturbation constraint ensure high semantic similarity of adversarial examples at both the text level and the individual word level, while the sparsity constraint explicitly restricts the number of perturbed words, which also helps enhance the quality of the generated text. We conduct extensive experiments on eight text datasets against three representative natural language models, and experimental results show that TextHoaxer can generate high-quality adversarial examples with higher semantic similarity and lower perturbation rates under the tight-budget setting.
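For item 3 above, the $(1-1/e)$ guarantee comes from the standard greedy scheme for maximizing a monotone submodular set function under a cardinality constraint. Below is a minimal, generic sketch of that scheme; the attitude function of the AIC model is replaced by a hypothetical oracle `f` (in practice it would be estimated, e.g., by Monte Carlo simulation of the diffusion model).

```python
# Generic greedy maximization of a monotone submodular set function under a
# cardinality constraint -- the standard (1 - 1/e) scheme. The oracle `f` is a
# hypothetical stand-in for an estimated attitude function.
from typing import Callable, FrozenSet, Iterable, Set


def greedy_max(f: Callable[[FrozenSet], float],
               ground_set: Iterable,
               k: int) -> Set:
    """Return a seed set of size <= k chosen greedily by marginal gain."""
    selected: Set = set()
    candidates = set(ground_set)
    for _ in range(k):
        base = f(frozenset(selected))
        best, best_gain = None, 0.0
        for v in candidates:
            gain = f(frozenset(selected | {v})) - base
            if gain > best_gain:
                best, best_gain = v, gain
        if best is None:          # no positive marginal gain left
            break
        selected.add(best)
        candidates.remove(best)
    return selected
```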
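For item 4 above, a policy minimizing the expected number of steps to reach a goal state in the product of the event model and the specification automaton can, in its simplest explicit form, be computed by value iteration with unit step costs. The sketch below assumes the product states, actions, and transition probabilities are given explicitly; the paper's general algorithm and its specialized variants are more involved.

```python
# A minimal value-iteration sketch for minimizing the expected number of steps
# to reach a goal state on an explicitly given product model. All data
# structures here (states, actions, trans, goals) are assumed inputs.

def expected_steps_policy(states, actions, trans, goals,
                          iters=1000, tol=1e-8):
    """trans[(s, a)] is a dict {next_state: probability}.
    Returns (value, policy), where value[s] approximates the minimum
    expected number of steps from s to a goal state."""
    V = {s: 0.0 for s in states}
    policy = {}
    for _ in range(iters):
        delta = 0.0
        for s in states:
            if s in goals:
                continue                      # goal states cost nothing more
            best_a, best_q = None, float("inf")
            for a in actions:
                nxt = trans.get((s, a))
                if not nxt:
                    continue                  # action unavailable in s
                q = 1.0 + sum(p * V[t] for t, p in nxt.items())
                if q < best_q:
                    best_a, best_q = a, q
            if best_a is not None:
                delta = max(delta, abs(best_q - V[s]))
                V[s], policy[s] = best_q, best_a
        if delta < tol:                       # converged
            break
    return V, policy
```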
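For item 5 above, here is a hedged sketch of the kind of three-term objective described: a semantic similarity term, a pair-wise perturbation penalty, and a sparsity penalty over a word-level perturbation matrix in embedding space. The exact terms, weights, and optimizer used by TextHoaxer differ; `sent_sim` and all parameter names are hypothetical stand-ins.

```python
# Illustrative three-term objective over a perturbation matrix in word
# embedding space: similarity to the original text, a pair-wise per-word
# perturbation penalty, and a sparsity penalty. Weights and the similarity
# callable are assumptions, not TextHoaxer's exact formulation.
import numpy as np


def perturbation_objective(emb_orig: np.ndarray,   # (n_words, d) original embeddings
                           delta: np.ndarray,      # (n_words, d) perturbation matrix
                           sent_sim,               # callable: (emb, emb) -> similarity score
                           lam_pair: float = 1.0,
                           lam_sparse: float = 0.1) -> float:
    emb_adv = emb_orig + delta
    # 1) keep the adversarial text semantically close to the original.
    similarity = sent_sim(emb_orig, emb_adv)
    # 2) penalize large per-word shifts in embedding space.
    pairwise = np.linalg.norm(delta, axis=1).mean()
    # 3) encourage few perturbed words via an L1-style sparsity penalty.
    sparsity = np.abs(delta).sum(axis=1).mean()
    # Higher is better; a gradient-based optimizer would ascend this value.
    return similarity - lam_pair * pairwise - lam_sparse * sparsity
```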