Title: Analysis of Joint Scheduling and Power Control for Predictable URLLC in Industrial Wireless Networks
Wireless networks are being applied in various industrial sectors, and they are poised to support mission-critical industrial IoT applications that require ultra-reliable, low-latency communications (URLLC). Ensuring predictable per-packet communication reliability is a foundation of predictable URLLC, and scheduling and power control are two basic enablers. Scheduling and power control, however, are subject to challenges such as harsh environments, dynamic channels, and distributed network settings in industrial IoT. Existing solutions are mostly based on heuristic algorithms or asymptotic analyses of network performance, and field-deployable algorithms for ensuring predictable per-packet reliability are still lacking. Towards addressing this gap, we examine the cross-layer design of joint scheduling and power control and analyze the associated challenges. We use the Perron–Frobenius theorem to demonstrate that scheduling is necessary for ensuring predictable communication reliability, and, by investigating the characteristics of interference matrices, we show that scheduling that silences close-by links effectively constructs a set of links whose required reliability is feasible under proper transmission power control. Given that scheduling alone cannot ensure predictable communication reliability while ensuring high throughput and addressing fast-varying channel dynamics, we demonstrate how power control can improve both the reliability at each time instant and the throughput in the long term. Based on the analysis, we propose a candidate framework of joint scheduling and power control, and we demonstrate how this framework behaves in guaranteeing per-packet communication reliability in the presence of wireless channel dynamics of different time scales. Collectively, these findings provide insight into the cross-layer design of joint scheduling and power control for ensuring predictable per-packet reliability in the presence of wireless network dynamics and uncertainties.
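To make the Perron–Frobenius argument concrete, the sketch below (a minimal illustration, not the paper's algorithm) checks power-control feasibility for a hypothetical set of links: SINR targets gamma are jointly achievable with finite positive powers iff the spectral radius of the normalized interference matrix F is strictly below 1, in which case the classical closed form P = (I - F)^{-1} u gives the minimal feasible powers. Silencing close-by links deletes the dominant off-diagonal entries of F and thus shrinks its spectral radius, which is exactly the scheduling effect described above. The gain matrix G, targets gamma, and noise vector are assumed inputs.

    import numpy as np

    def check_sinr_feasibility(G, gamma, noise):
        """Feasibility of SINR targets under power control (illustrative).

        G[i, j] is the channel gain from transmitter j to receiver i
        (G[i, i] is the direct-link gain); gamma[i] is link i's required
        SINR; noise[i] is the noise power at receiver i.
        """
        gamma, noise = np.asarray(gamma, float), np.asarray(noise, float)
        n = len(gamma)
        # Normalized interference matrix: F[i, j] = gamma_i * G_ij / G_ii, i != j.
        F = gamma[:, None] * G / np.diag(G)[:, None]
        np.fill_diagonal(F, 0.0)
        rho = np.max(np.abs(np.linalg.eigvals(F)))  # spectral radius of F
        if rho >= 1.0:
            return False, None  # no finite positive power vector meets the targets
        u = gamma * noise / np.diag(G)
        return True, np.linalg.solve(np.eye(n) - F, u)  # minimal feasible powers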
Award ID(s):
1827211 1821962
NSF-PAR ID:
10110517
Author(s) / Creator(s):
;
Date Published:
Journal Name:
IEEE International Conference on Industrial Internet (ICII)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Cellular networks with D2D links are increasingly being explored for mission-critical applications (e.g., real-time control and AR/VR) which require predictable communication reliability. It is thus critical to control interference among concurrent transmissions in a predictable manner to ensure the required communication reliability. To this end, we propose a Unified Cellular Scheduling (UCS) framework that, based on the Physical-Ratio-K (PRK) interference model, schedules uplink, downlink, and D2D transmissions in a unified manner to ensure predictable communication reliability while maximizing channel spatial reuse. UCS also provides a simple, effective approach to mode selection that maximizes the communication capacity for each involved communication pair. UCS effectively uses multiple channels for high throughput as well as resilience to channel fading and external interference. Leveraging the availability of base stations (BSes) as well as high-speed, out-of-band connectivity between BSes, UCS effectively orchestrates the functionalities of BSes and user equipment (UE) for lightweight control signaling and ease of incremental deployment and integration with existing cellular standards. We have implemented UCS using the open-source, standards-compliant cellular networking platform OpenAirInterface, and we have validated the UCS design and implementation using USRP B210 software-defined radios in the ORBIT wireless testbed. We have also evaluated UCS through high-fidelity, at-scale simulation studies; we observe that UCS ensures predictable communication reliability while achieving a higher channel spatial reuse rate than existing mechanisms, and that the distributed UCS framework enables a channel spatial reuse rate statistically equal to that of the state-of-the-art centralized scheduling algorithm iOrder.
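    As a rough illustration of how the PRK model can drive scheduling in UCS-style designs, the sketch below greedily admits links into a time slot while keeping each link's PRK exclusion region silent. The rx_power map, the per-link parameter K, and the greedy loop are assumptions for illustration, not the actual UCS scheduler.

        def prk_conflict(rx_power, link, candidate_tx, K):
            """Under the PRK model, candidate transmitter C conflicts with a
            scheduled link (S, R) if the average power R receives from C
            exceeds 1/K of the power R receives from its own sender S; K is
            tuned per link so the link's required reliability is met."""
            sender, receiver = link
            return rx_power[(candidate_tx, receiver)] > rx_power[(sender, receiver)] / K[link]

        def schedule_slot(requests, rx_power, K):
            """Greedy single-slot admission maximizing spatial reuse (sketch)."""
            scheduled = []
            for link in requests:
                # Admit 'link' only if its sender disturbs no scheduled link
                # and no scheduled sender disturbs 'link'.
                if all(not prk_conflict(rx_power, other, link[0], K) and
                       not prk_conflict(rx_power, link, other[0], K)
                       for other in scheduled):
                    scheduled.append(link)
            return scheduled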
  2. The concept of Industry 4.0 introduces the unification of industrial Internet-of-Things (IoT), cyber-physical systems, and data-driven business modeling to improve the production efficiency of factories. To ensure high production efficiency, Industry 4.0 requires industrial IoT to be adaptable, scalable, real-time, and reliable. Recent successful industrial wireless standards such as WirelessHART have emerged as a feasible approach for such industrial IoT. For reliable and real-time communication in highly unreliable environments, they adopt a high degree of redundancy. While a high degree of redundancy is crucial to real-time control, it wastes a great deal of energy, bandwidth, and time under a centralized approach, and is therefore less suitable for scaling and for handling network dynamics. To address these challenges, we propose DistributedHART, a distributed real-time scheduling system for WirelessHART networks. The essence of our approach is to adopt local (node-level) scheduling through time-window allocation among the nodes, which allows each node to schedule its transmissions locally and online using a real-time scheduling policy. DistributedHART obviates the need to create and disseminate a central global schedule, thereby significantly reducing resource usage and enhancing scalability. To our knowledge, it is the first distributed real-time multi-channel scheduler for WirelessHART. We have implemented DistributedHART and experimented on a 130-node testbed. Our testbed experiments as well as simulations show at least 85% less energy consumption in DistributedHART compared to the existing centralized approach while ensuring similar schedulability.
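    The node-level idea in DistributedHART can be pictured with the following sketch: within its allocated time window, a node orders its own pending packets online under a real-time policy (EDF is assumed here; the system supports other policies), so no central schedule ever needs to be created or disseminated. The data layout is hypothetical.

        def transmit_in_window(pending, window_slots):
            """Local, online scheduling inside one node's time window (sketch).

            pending: list of (absolute_deadline, packet) pairs queued at this
            node. The node transmits in earliest-deadline-first order and
            leaves the rest for its next window.
            """
            pending.sort(key=lambda entry: entry[0])   # EDF order by deadline
            sent = [packet for _, packet in pending[:window_slots]]
            del pending[:window_slots]                 # consumed this window
            return sent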
  3. With the rapid growth of Internet of Things (IoT) applications in recent years, there is a strong need for wireless uplink scheduling algorithms that determine when, and which subset of, a large number of users should transmit to the central controller. Unlike the downlink case, the central controller in the uplink scenario typically has very limited information about the users; on the other hand, collecting all such information from a large number of users typically incurs prohibitively high communication overhead. This motivates us to investigate the development of an efficient and low-overhead uplink scheduling algorithm suitable for large-scale IoT applications with a limited amount of coordination from the central controller. Specifically, we first characterize a capacity outer bound subject to the sampling constraint, under which only a small subset of users are allowed to use control channels for system state reporting and wireless channel probing. Next, we relax the sampling constraint and propose a joint sampling and transmission algorithm, which utilizes full knowledge of channel state distributions and instantaneous queue lengths to achieve the capacity outer bound. The insights obtained from this capacity-achieving algorithm allow us to develop an efficient and low-overhead scheduling algorithm that strictly satisfies the sampling constraint with asymptotically diminishing throughput loss. Moreover, the throughput performance of our proposed algorithm is independent of the number of users, a highly desirable property in large-scale IoT systems. Finally, we perform extensive simulations to validate our theoretical results.
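    A schematic of a sampling-constrained scheduling step in the spirit described above (the paper's capacity-achieving algorithm and its analysis are not reproduced; the ranking weight and the observe() callback are assumptions): with a budget of m control channels, the controller picks m users to report state and probe channels, then applies a MaxWeight-style rule over the sampled set.

        def sample_and_schedule(queues, mean_rates, observe, m):
            """One uplink scheduling step under a sampling budget of m (sketch).

            queues[i]: controller's queue-length estimate for user i;
            mean_rates[i]: mean of user i's known channel distribution;
            observe(i): lets sampled user i report (queue_len, inst_rate).
            """
            n = len(queues)
            # Sample the m users with the largest expected backlog-rate product.
            sampled = sorted(range(n), key=lambda i: queues[i] * mean_rates[i],
                             reverse=True)[:m]
            observed = {i: observe(i) for i in sampled}
            # MaxWeight over the sampled set using observed instantaneous values.
            return max(observed, key=lambda i: observed[i][0] * observed[i][1])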
  4. Emerging Industrial Internet-of-Things systems require wireless solutions to connect sensors, actuators, and controllers as part of high-data-rate feedback-control loops over real-time flows. A key challenge is to provide predictable performance and agility in response to fluctuations in link quality, variable workloads, and topology changes. We propose WARP to address this challenge. WARP uses programs to specify a network's behavior and includes a synthesis procedure to automatically generate such programs from a high-level specification of the system's workload and topology. WARP has three unique features: (1) WARP uses a domain-specific language to specify stateful programs that include conditional statements to control when a flow's packets are transmitted. The execution paths of programs depend on the pattern of packet losses observed at runtime, enabling WARP to readily adapt to packet losses due to short-term variations in link quality. (2) Our synthesis technique uses heuristics to improve network performance by considering multiple packet loss patterns and the associated execution paths when determining the transmissions performed by nodes. Furthermore, the generated programs ensure that the likelihood of a flow delivering its packets by its deadline exceeds a user-specified threshold. (3) WARP can adapt to workload and topology changes without explicitly reconstructing a network's program, based on the observation that nodes can independently synthesize the same program when they share the same workload and topology information. Simulations show that WARP improves network throughput for data collection, dissemination, and mixed workloads on two realistic topologies. Testbed experiments show that WARP reduces the time to add new flows by a factor of five compared with a state-of-the-art centralized control plane, while guaranteeing the real-time performance and reliability of all flows.
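    Feature (1) can be pictured with a toy loss-adaptive program (the real WARP DSL and its constructs are not reproduced here; the nodes, flow, and slot layout are hypothetical): the branch taken in slot 2 depends on the loss pattern observed at runtime, so a retransmission slot is consumed only when the first attempt fails.

        def flow_program(slot, delivered):
            """Toy stateful transmission program in the spirit of WARP (sketch).

            delivered: set of (src, dst) hops already acknowledged at runtime.
            Returns the action a node should take in the given slot.
            """
            if slot == 1:
                return ("A", "B", "tx")      # first-hop attempt for the flow
            if slot == 2 and ("A", "B") not in delivered:
                return ("A", "B", "retx")    # loss observed: take the retry path
            if slot == 2:
                return ("B", "C", "tx")      # success: forward one slot early
            return None                      # idle slot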
  5. Embedded and real-time devices in many domains are increasingly dependent on network connectivity. The ability to offload computations encourages Cost, Size, Weight, and Power (C-SWaP) optimizations, while coordination over the network effectively enables systems to sense the environment beyond their own local sensors and to collaborate globally. The promise is significant: Autonomous Vehicles (AVs) coordinating with each other through infrastructure, factories aggregating data for global optimization, and power-constrained devices leveraging offloaded inference tasks. Low-latency wireless technologies (e.g., 5G), paired with the edge cloud, are further enabling these trends. Unfortunately, computation at the edge poses significant challenges due to the combination of limited resources, high performance requirements, security concerns arising from multi-tenancy, and real-time latency constraints. This paper introduces Edge-RT, a set of OS extensions for the edge designed to meet end-to-end (packet reception to transmission) deadlines across chains of computations. It supports strong security by executing a chain per client device, thus isolating tenant and device computations. Despite its practical focus on deadlines and strong isolation, it maintains high system efficiency. To do so, Edge-RT focuses on per-packet deadlines that are inherited by the computations operating on each packet. It introduces mechanisms to avoid per-packet system overheads while trading only bounded impacts on predictable scheduling. Results show that, compared to Linux and EdgeOS, Edge-RT both maintains higher throughput and meets significantly more deadlines for systems with bimodal workloads at utilization above 60%, in the presence of malicious tasks, and as the system scales up in clients.
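    The per-packet deadline inheritance described above can be sketched as follows (illustrative, not Edge-RT's implementation; the stage-chaining convention is assumed): a packet's absolute deadline is fixed when it is received, every computation in its chain runs with that inherited deadline, and the runtime always picks the earliest-deadline ready stage.

        import heapq

        def run_edf(ready):
            """EDF over computation chains with inherited packet deadlines (sketch).

            ready: heap of (abs_deadline, seq, stage, packet); seq breaks ties.
            stage(packet) runs one computation of the packet's chain and returns
            the next stage, or None once the response packet has been sent.
            """
            heapq.heapify(ready)
            while ready:
                deadline, seq, stage, packet = heapq.heappop(ready)
                next_stage = stage(packet)
                if next_stage is not None:
                    # The rest of the chain inherits the packet's deadline.
                    heapq.heappush(ready, (deadline, seq, next_stage, packet))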