

Title: Broadcasting Real-Time Flows in Integrated Backhaul and Access 5G Networks
We study the problem of broadcasting real-time flows in multi-hop wireless networks. We assume that each packet has a stringent deadline, and each node in the network obtains some utility based on the number of packets delivered to it on time for each flow. We propose a distributed protocol called delegated-set routing (DSR) that incurs virtually no coordination overhead among nodes. We also develop distributed algorithms that aim to maximize the total timely utility under DSR. The benefits of the DSR protocol and the distributed algorithms are demonstrated by both theoretical analysis and simulation results. We show that our algorithms achieve higher timely throughput even when compared against centralized throughput-optimal policies that do not consider deadline constraints.
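The abstract does not spell out the scheduling policy, so as a minimal illustration of "timely throughput" here is an earliest-deadline-first sketch for a single link. The function name, the slotted-time model, and the EDF rule are assumptions for illustration, not the paper's DSR algorithm:

```python
import heapq

def timely_throughput(packets, horizon):
    """Earliest-deadline-first over one link: each slot sends one packet,
    drops any packet whose deadline has passed, and counts on-time
    deliveries.

    packets: list of (arrival_slot, deadline_slot) tuples.
    """
    by_arrival = sorted(packets)
    ready = []          # min-heap keyed by deadline
    delivered = 0
    i = 0
    for t in range(horizon):
        # Admit packets that have arrived by slot t.
        while i < len(by_arrival) and by_arrival[i][0] <= t:
            heapq.heappush(ready, by_arrival[i][1])
            i += 1
        # Discard packets whose deadline has already expired.
        while ready and ready[0] < t:
            heapq.heappop(ready)
        if ready:
            heapq.heappop(ready)        # transmit one packet this slot
            delivered += 1
    return delivered
```

Under a deadline constraint, dropping expired packets rather than queuing them is what separates timely throughput from plain throughput.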
Award ID(s):
1719384
NSF-PAR ID:
10160755
Author(s) / Creator(s):
Date Published:
Journal Name:
17th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks, WIOPT 2019
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We study exploration in stochastic multi-armed bandits when we have access to a divisible resource that can be allocated in varying amounts to arm pulls. We focus in particular on the allocation of distributed computing resources, where we may obtain results faster by allocating more resources per pull, but might have reduced throughput due to nonlinear scaling. For example, in simulation-based scientific studies, an expensive simulation can be sped up by running it on multiple cores. This speed-up, however, is partly offset by the communication among cores, which results in lower throughput than if fewer cores were allocated to run more trials in parallel. In this paper, we explore these trade-offs in two settings. First, in a fixed-confidence setting, we need to find the best arm with a given target success probability as quickly as possible. We propose an algorithm which trades off between information accumulation and throughput and show that the time taken can be upper bounded by the solution of a dynamic program whose inputs are the gaps between the sub-optimal and optimal arms. We also prove a matching hardness result. Second, we present an algorithm for a fixed-deadline setting, where we are given a time deadline and need to maximize the probability of finding the best arm. We corroborate our theoretical insights with simulation experiments that show that the algorithms consistently match or outperform baseline algorithms on a variety of problem instances.
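The latency-versus-throughput trade-off described above can be made concrete with a small sketch. An Amdahl-style speedup curve with a 10% serial fraction is an illustrative assumption here, not the paper's model:

```python
def pulls_by_deadline(total_cores, cores_per_pull, deadline, speedup):
    """Count arm pulls finished by `deadline` when each pull uses
    `cores_per_pull` cores: one pull takes 1/speedup(k) time units,
    and total_cores // k pulls run in parallel."""
    parallel = total_cores // cores_per_pull
    rounds = int(deadline * speedup(cores_per_pull))  # pulls per worker
    return parallel * rounds

def amdahl(k, serial=0.1):
    """Amdahl's-law speedup with a fixed serial fraction (assumed 10%)."""
    return 1.0 / (serial + (1 - serial) / k)
```

With 16 cores and a deadline of 4 time units, one core per pull completes more total pulls than four cores per pull, but each individual result arrives more slowly; choosing between these regimes is exactly the allocation problem the paper studies.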
  2. Abstract

    Forward genetics is used to identify the genetic basis for a phenotype. The approach involves identifying a mutant organism exhibiting a phenotype of interest and then mapping the causative locus or gene. Bulked-segregant analysis (BSA) is a quick and effective approach to map mutants, using pools of mutant and wild-type plants from a segregating population to identify linkage to the mutant phenotype, and it has been used successfully in plants. Traditional linkage mapping approaches are time-intensive and can be very difficult. With the maturation and falling cost of high-throughput sequencing, combining it with BSA has become extremely effective in multiple plant species, including Zea mays (maize). While the approach is powerful, careful experimental design, sound bioinformatic mapping techniques, and correct interpretation of results are important to obtain the desired results in an effective and timely manner. Poor design of a mapping population, limited bioinformatic experience, and inadequate understanding of sequence data are the main pitfalls for the researcher. Here, we describe a straightforward protocol for mapping mutations responsible for a phenotype of interest in maize, using high-throughput sequencing and BSA. Specifically, we discuss relevant aspects of developing a mutant mapping population. This is followed by a detailed protocol for DNA preparation and analysis of short-read sequences to map and identify candidate causative mutations responsible for the mutant phenotype of interest. We provide command-line and Perl scripts to complete the bioinformatic analysis of the mutant sequence data. This protocol lays out the design of the BSA, the bioinformatic approaches, and the interpretation of the sequencing data. These methods are adaptable to any forward-genetics experiment and provide a step-by-step approach to identifying the genetic basis of a maize mutant phenotype.
© 2022 The Authors. Current Protocols published by Wiley Periodicals LLC.

    Basic Protocol: Bulked‐segregant analysis and high‐throughput sequencing to map maize mutants
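The core computation in BSA mapping is comparing allele frequencies between the two pools. A minimal sketch (the function name, input tuple layout, and the F2/recessive expectation noted in the docstring are illustrative assumptions, not the protocol's scripts):

```python
def bsa_delta(snps):
    """For each SNP, the difference in alternate-allele frequency between
    the mutant and wild-type pools. Near the causative locus the mutant
    pool approaches 1.0 while the wild-type pool stays well below it
    (about 1/3 for a recessive mutation in an F2 population), so large
    deltas flag the candidate region.

    snps: list of (position, mut_alt, mut_total, wt_alt, wt_total) counts.
    """
    return [(pos, ma / mt - wa / wt) for pos, ma, mt, wa, wt in snps]
```

In practice these deltas are smoothed along each chromosome and the peak interval is inspected for candidate causative mutations.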

     
  3. A blockchain is a database of sequential events that is maintained by a distributed group of nodes. A key consensus problem in blockchains is that of determining the next block (data element) in the sequence. Many blockchains address this by electing a new node to propose each new block. The new block is (typically) appended to the tip of the proposer's local blockchain, and subsequently broadcast to the rest of the network. Without network delay (or adversarial behavior), this procedure would give a perfect chain, since each proposer would have the same view of the blockchain. A major challenge in practice is forking. Due to network delays, a proposer may not yet have the most recent block, and may therefore create a side chain that branches from the middle of the main chain. Forking reduces throughput, since only a single main chain can survive and all other blocks are discarded. We propose a new P2P protocol for blockchains called Barracuda, in which each proposer, prior to proposing a block, polls ℓ other nodes for their local blocktree information. Under a stochastic network model, we prove that this lightweight primitive improves throughput as if the entire network were a factor of ℓ faster. We provide guidelines on how to implement Barracuda in practice, guaranteeing robustness against several real-world factors.
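The polling primitive is simple to sketch. Representing each peer's blocktree by only its tip height is a simplifying assumption for illustration; the actual protocol exchanges blocktree information:

```python
import random

def barracuda_propose(local_tip_height, peer_heights, ell, rng=random):
    """Before proposing, poll `ell` random peers for their blocktree
    height and build on the best tip seen; this reduces forks caused by
    proposing on a stale view of the chain."""
    sampled = rng.sample(peer_heights, min(ell, len(peer_heights)))
    best = max([local_tip_height] + sampled)
    return best + 1   # height of the newly proposed block
```

Even a small ℓ sharply reduces the chance that the proposer builds on an outdated tip, which is the source of the factor-of-ℓ speedup claimed above.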
  4. We consider the problem of serving real-time flows over a multi-hop wireless network. Each flow is composed of packets that have strict deadlines, and the goal is to maximize the weighted timely throughput of the system. Consistent with recent developments in mm-wave communications, we assume that the links are directional but lossy, with unknown probabilities of successful packet transmission. An average link-utilization budget (similar to a power constraint) constrains the system. We pose the problem as a Constrained Markov Decision Process (CMDP) with an unknown transition kernel. We use a duality approach to decompose the problem into an inner unconstrained MDP with link usage costs, and an outer link-cost update step. For the inner MDP, we develop model-based reinforcement learning algorithms that sample links by sending packets to learn the link statistics. While the first type of algorithm samples links at will at the beginning and constructs the model, the second type is an online approach that can only sample links using packets from the flows that traverse them. The approach to the outer problem follows gradient descent. We characterize the sample complexity (number of packets transmitted) required to obtain near-optimal policies, and show that although a basic online approach has a poorer sample complexity bound, it can be modified to obtain an online algorithm with excellent empirical performance.
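The outer link-cost update is a standard dual (sub)gradient step, which can be sketched as follows. The function name and parameters are illustrative; the paper's exact step rule may differ:

```python
def update_link_cost(lam, avg_usage, budget, step):
    """Outer dual step on one link's price: raise the price when average
    utilization exceeds the budget, lower it (projected to stay
    nonnegative) otherwise. The inner MDP is then re-solved with the
    updated link cost."""
    return max(0.0, lam + step * (avg_usage - budget))
```

Iterating this update drives average link utilization toward the budget, which is how the duality decomposition enforces the CMDP constraint.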
  5. Fault-tolerant coordination services have been widely used in distributed applications in cloud environments. Recent years have witnessed the emergence of time-sensitive applications deployed in edge computing environments, which introduces both challenges and opportunities for coordination services. On one hand, coordination services must recover from failures in a timely manner. On the other hand, edge computing employs local networked platforms that can be exploited to achieve timely recovery. In this work, we first identify the limitations of the leader election and recovery protocols underlying Apache ZooKeeper, the prevailing open-source coordination service. To reduce recovery latency from leader failures, we then design RT-ZooKeeper with a set of novel features including a fast-convergence election protocol, a quorum channel notification mechanism, and a distributed epoch persistence protocol. We have implemented RT-ZooKeeper based on ZooKeeper version 3.5.8. Empirical evaluation shows that RT-ZooKeeper achieves a 91% reduction in maximum recovery latency in comparison to ZooKeeper. Furthermore, a case study demonstrates that fast failure recovery in RT-ZooKeeper can benefit a common messaging service like Kafka in terms of message latency.
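For context on what such an election protocol decides, here is a sketch of the vote comparison used in ZooKeeper-style fast leader election (the function is an illustrative reduction, not RT-ZooKeeper's protocol):

```python
def elect_leader(votes):
    """Pick a leader ZooKeeper-style: prefer the highest epoch, then the
    highest last-seen zxid (most up-to-date log), then the highest
    server id as a tie-breaker.

    votes: list of (epoch, last_zxid, server_id) tuples.
    """
    return max(votes)[2]   # tuple comparison encodes the preference order
```

The convergence time of exchanging and agreeing on these votes across the ensemble is the recovery latency that RT-ZooKeeper's fast-convergence protocol targets.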