Title: An In-Depth Measurement Analysis of 5G mmWave PHY Latency and Its Impact on End-to-End Delay
5G aims to offer not only significantly higher throughput than previous generations of cellular networks, but also millisecond (ms) and sub-millisecond (ultra-low) latency support at the 5G physical (PHY) layer for future applications. While prior measurement studies have confirmed that commercial 5G deployments can achieve up to several Gigabits per second (Gbps) throughput (especially with the mmWave 5G radio), are they able to deliver on the (sub-)millisecond latency promise? With this question in mind, we conducted, to our knowledge, the first in-depth measurement study of commercial 5G mmWave PHY latency using detailed physical-channel events and messages. Through carefully designed experiments and data analytics, we dissect the various factors that influence 5G PHY latency of both downlink and uplink data transmissions, and explore their impact on end-to-end (E2E) delay. We find that while in the best cases the 5G mmWave PHY layer is capable of delivering ms/sub-ms latency (with a minimum of 0.09 ms for downlink and 0.76 ms for uplink), such cases occur rarely. A variety of factors, such as channel conditions, retransmissions, physical-layer control and scheduling mechanisms, mobility, and application (edge) server placement, can all contribute to increased 5G PHY latency (and thus E2E delay). Our study provides insights for 5G vendors and carriers, as well as application developers/content providers, on how to better optimize or mitigate these factors for improved 5G latency performance.
Award ID(s):
2220286
NSF-PAR ID:
10447762
Author(s) / Creator(s):
Date Published:
Journal Name:
Passive and Active Measurement Conference (PAM), 2023.
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
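
As a concrete illustration of the kind of PHY-latency dissection described in the abstract, the sketch below derives per-HARQ-process uplink PHY latency from timestamped physical-channel events. The event names, trace format, and all numbers are hypothetical stand-ins rather than the paper's actual tooling or data (only the 0.76 ms uplink minimum echoes the abstract); a real analysis would extract such events from chipset diagnostic logs.

```python
# Minimal sketch (not the authors' tooling): estimating per-packet 5G
# uplink PHY latency from timestamped physical-channel events.
from dataclasses import dataclass

@dataclass
class PhyEvent:
    t_ms: float   # event timestamp in milliseconds
    kind: str     # e.g. "SR", "UL_GRANT", "PUSCH_TX", "UL_ACK" (hypothetical names)
    harq_id: int  # HARQ process the event belongs to

def uplink_phy_latency(events):
    """Uplink PHY latency per HARQ process: scheduling request (SR) to the
    ACK of the corresponding PUSCH transmission. HARQ retransmissions
    within the same process lengthen the measured latency."""
    start = {}
    latencies = []
    for ev in sorted(events, key=lambda e: e.t_ms):
        if ev.kind == "SR":
            start.setdefault(ev.harq_id, ev.t_ms)
        elif ev.kind == "UL_ACK" and ev.harq_id in start:
            latencies.append(ev.t_ms - start.pop(ev.harq_id))
    return latencies

# Example trace: one clean transmission, one needing a retransmission.
trace = [
    PhyEvent(0.00, "SR", 1), PhyEvent(0.25, "UL_GRANT", 1),
    PhyEvent(0.50, "PUSCH_TX", 1), PhyEvent(0.76, "UL_ACK", 1),
    PhyEvent(1.00, "SR", 2), PhyEvent(1.30, "PUSCH_TX", 2),
    PhyEvent(2.10, "PUSCH_TX", 2),  # HARQ retransmission after a decode failure
    PhyEvent(2.60, "UL_ACK", 2),
]
print(uplink_phy_latency(trace))  # [0.76, 1.6] (ms)
```

Note how the second HARQ process, which needs a retransmission, more than doubles the measured latency, mirroring the abstract's point that retransmissions are one of the factors inflating PHY latency.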
More Like this
  1. The highly anticipated 5G mmWave technology promises to enable many uplink-oriented, latency-critical applications (LCAs) such as Augmented Reality and Connected Autonomous Vehicles. Nonetheless, recent measurement studies have largely focused on its downlink performance. In this work, we perform a systematic study of the uplink performance of commercial 5G mmWave networks across 3 major US cities and 2 mobile operators. Our study makes three contributions. (1) It reveals that 5G mmWave uplink performance is geographically diverse and substantially better than LTE in terms of bandwidth and latency, but often erratic and suboptimal, which can degrade LCA performance. (2) Our analysis of control messages and PHY-level KPIs shows that the root causes of the suboptimal performance are fundamental to 5G mmWave and cannot be easily fixed via simple tuning of network configurations. (3) We identify various design and deployment optimizations that 5G operators can explore to bring 5G mmWave performance to the level needed to ultimately support LCAs.
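Item 1 characterizes mmWave uplink throughput as "erratic"; one simple way to quantify that from per-second throughput KPIs is the coefficient of variation per location. The sketch below is illustrative only (not the study's methodology), with made-up sample values.

```python
# Quantifying "erratic" uplink throughput via the coefficient of variation
# (CV = stdev / mean); higher CV means more erratic. Samples are made up.
from statistics import mean, stdev

def throughput_cv(samples_mbps):
    """Coefficient of variation of a per-second throughput KPI series."""
    return stdev(samples_mbps) / mean(samples_mbps)

stable  = [180, 175, 185, 190, 178]   # steady uplink (Mbps)
erratic = [450, 20, 600, 5, 300]      # mmWave uplink with blockage dips
print(f"stable:  CV = {throughput_cv(stable):.2f}")   # ~0.03
print(f"erratic: CV = {throughput_cv(erratic):.2f}")  # ~0.95
```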
  2. Augmented Reality (AR) has been widely hailed as a representative of the ultra-high-bandwidth, ultra-low-latency apps that 5G networks will enable. While single-user AR can perform AR tasks locally on the mobile device, multi-user AR apps, which allow multiple users to interact within the same physical space, critically rely on the cellular network to support user interactions. However, a recent study showed that multi-user AR apps can experience very high end-to-end (E2E) latency when running over LTE, rendering user interaction practically infeasible. In this paper, we study whether 5G mmWave, which promises significant bandwidth and latency improvements over LTE, can support multi-user AR by conducting an in-depth measurement study of the same popular multi-user AR app over both LTE and 5G mmWave. Our measurement and analysis show that: (1) The E2E AR latency over LTE is significantly lower than the values reported in the previous study, but it still remains too high for practical user interaction. (2) 5G mmWave brings no benefit to multi-user AR apps. (3) While 5G mmWave reduces the latency of the uplink visual data transmission, other components of the AR app are independent of the network technology and account for a significant fraction of the E2E latency. (4) The app drains 66% more network energy, which translates to 28% higher total energy, over 5G mmWave compared to LTE.
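Item 2's energy numbers permit a back-of-the-envelope inference the abstract does not state explicitly: if 66% more network energy yields 28% higher total energy, the network radio must account for roughly 0.28/0.66 ≈ 42% of the app's total energy under LTE. A sketch of that arithmetic (the 42% share is derived, not reported):

```python
# Derived from item 2's numbers: if network energy grows by 66% and that
# raises total energy by 28%, the network's share f of total energy under
# LTE must satisfy 1 + 0.66*f = 1.28. (f is inferred, not reported.)
network_energy_increase = 0.66  # 66% more network energy over mmWave
total_energy_increase = 0.28    # 28% higher total energy over mmWave

f = total_energy_increase / network_energy_increase
print(f"implied network share of total energy under LTE: {f:.0%}")  # 42%
```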
  3. The continuously increasing demand for availability and ultra-reliability of low-latency, broadband wireless connections is driving further research into the standardization of next-generation mobile systems. 6G networks, among other benefits, should offer global ubiquitous mobility thanks to the utilization of the Space segment as an intelligent yet autonomous ecosystem. In this framework, multi-layered networks will take charge of providing connectivity by implementing Cloud-Radio Access Network (C-RAN) functionalities on heterogeneous nodes distributed over aerial and orbital segments. Unmanned Aerial Vehicles (UAVs), High-Altitude Platforms (HAPs), and small satellites compose the Space ecosystem underpinning these 3D networks. Recently, much interest has been directed at splitting operations so as to distribute baseband processing functionalities among such nodes, balancing the computational load and reducing power consumption. This work focuses on the hardware implementation of C-RAN physical-layer (PHY-layer) operations to derive their computational and energy demands. In more detail, the 5G Downlink Shared Channel (DLSCH) and the Physical Downlink Shared Channel (PDSCH) are first simulated in the MATLAB environment to evaluate how the computational load varies with the selected splitting option and the number of antennas available at the transmitter (TX) and receiver (RX) sides. Then, the PHY-layer processing chain is implemented in software and the various splitting options are tested on low-cost processors, such as the Raspberry Pi (RP) 3B+ and 4B. By overclocking the RPs, we compute the execution time and derive the instruction count (IC) per program for each considered splitting option, so as to obtain the mega-instructions per second (MIPS) required for the expected processing time. Finally, by comparing the performance achieved by the employed RPs against that of the Nvidia Jetson Nano (JN) processor used as a benchmark, we discuss size, weight, power, and cost (SWaP-C)...
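Item 3's MIPS bookkeeping reduces to MIPS = IC / (execution time × 10^6), which can then be compared against what a processing deadline demands. The sketch below uses entirely hypothetical instruction counts, deadlines, and run times, not the paper's measurements.

```python
# Sketch of the MIPS bookkeeping described in item 3 (illustrative numbers
# only): given an instruction count (IC) per processing run and a measured
# execution time, derive the MIPS achieved and the MIPS a node would need
# to sustain to meet a target processing deadline.
def mips_required(instruction_count, deadline_s):
    """Mega-instructions per second needed to finish within deadline_s."""
    return instruction_count / (deadline_s * 1e6)

def measured_mips(instruction_count, exec_time_s):
    """MIPS actually achieved for a measured run."""
    return instruction_count / (exec_time_s * 1e6)

# Hypothetical split option: 4.2e9 instructions for one slot's worth of
# PDSCH processing, a 0.5 ms slot deadline, and a 1.8 s measured run.
ic = 4_200_000_000
print(f"required: {mips_required(ic, 0.0005):,.0f} MIPS")  # 8,400,000 MIPS
print(f"achieved: {measured_mips(ic, 1.8):,.0f} MIPS")     # 2,333 MIPS
```

The gap between the two numbers is exactly the kind of evidence such a study uses to judge which splitting options are feasible on a given class of low-cost processor.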
  4. Cellular networks with D2D links are increasingly being explored for mission-critical applications (e.g., real-time control and AR/VR) which require predictable communication reliability. Thus it is critical to control interference among concurrent transmissions in a predictable manner to ensure the required communication reliability. To this end, we propose a Unified Cellular Scheduling (UCS) framework that, based on the Physical-Ratio-K (PRK) interference model, schedules uplink, downlink, and D2D transmissions in a unified manner to ensure predictable communication reliability while maximizing channel spatial reuse. UCS also provides a simple, effective approach to mode selection that maximizes the communication capacity for each involved communication pair. UCS effectively uses multiple channels for high throughput as well as resilience to channel fading and external interference. Leveraging the availability of base stations (BSes) as well as high-speed, out-of-band connectivity between BSes, UCS effectively orchestrates the functionalities of BSes and user equipment (UE) for lightweight control signaling and ease of incremental deployment and integration with existing cellular standards. We have implemented UCS using the open-source, standards-compliant cellular networking platform OpenAirInterface, and we have validated the UCS design and implementation using the USRP B210 software-defined radios in the ORBIT wireless testbed. We have also evaluated UCS through high-fidelity, at-scale simulation studies; we observe that UCS ensures predictable communication reliability while achieving a higher channel spatial reuse rate than existing mechanisms, and that the distributed UCS framework enables a channel spatial reuse rate statistically equal to that of the state-of-the-art centralized scheduling algorithm iOrder.
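Item 4's scheduler builds on the Physical-Ratio-K (PRK) interference model, under which a node C may transmit concurrently with a link S → R only if the power R receives from C is at least a factor K below the power R receives from S. A minimal sketch of that conflict check follows; the power values and K are illustrative, and a real PRK instantiation adapts K per link to meet the link's reliability requirement.

```python
# Minimal sketch of the PRK conflict check underlying UCS-style scheduling:
# interferer C conflicts with link S -> R if R's received power from C
# exceeds 1/K of R's received power from S. Values below are illustrative.
def prk_conflict(p_signal_at_r_mw, p_interferer_at_r_mw, k):
    """True if the interferer violates the PRK constraint for this link."""
    return p_interferer_at_r_mw > p_signal_at_r_mw / k

# Link S->R delivers 1.0 mW at R; K = 100, so interference above 0.01 mW
# forces serialization.
print(prk_conflict(1.0, 0.002, 100))  # False: concurrent tx allowed
print(prk_conflict(1.0, 0.02, 100))   # True: transmissions must be serialized
```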
  5. In this paper, we propose a multi-band medium access control (MAC) protocol for an infrastructure-based network with an access point (AP) that supports In-Band Full-Duplex (IBFD) and multiuser transmission to multi-band-enabled stations. The Multi-Band Full Duplex MAC (MB-FDMAC) protocol mainly uses the sub-6 GHz band for control-frame exchange, transmitted at the lowest rate per IEEE 802.11 standards, and uses the 60 GHz band, which has significantly higher instantaneous bandwidth, exclusively for data-frame exchange. We also propose a selection method that ensures fairness among uplink and downlink stations. Our results show that MB-FDMAC improves the spectral efficiency in the mmWave band by 324%, 234%, and 189% compared with state-of-the-art MAC protocols. In addition, MB-FDMAC outperforms the combined throughput of sub-6 GHz and 60 GHz IBFD multiuser MIMO networks that operate independently by more than 85%. We also study the effect of multiple network variables, such as the number of stations in the network, the percentage of mmWave-band stations, the size of the contention stage, and the selection method, on MB-FDMAC by evaluating the change in throughput, packet delay, and fairness among stations. Finally, we propose a method to improve the utilization of the high bandwidth of the mmWave band by incorporating time duplexing into MB-FDMAC, which we show can enhance fairness by 12.5% and reduce packet delay by 80%.
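Item 5 evaluates fairness among stations. Jain's fairness index is a standard way to quantify such fairness; the abstract does not say which metric MB-FDMAC's evaluation uses, so the sketch below is illustrative only.

```python
# Jain's fairness index over per-station throughputs: 1.0 means perfectly
# fair, 1/n means one station monopolizes the channel. This is a standard
# metric, not necessarily the one used in the MB-FDMAC evaluation.
def jain_fairness(throughputs_mbps):
    """Jain's index: (sum x)^2 / (n * sum x^2)."""
    n = len(throughputs_mbps)
    total = sum(throughputs_mbps)
    return total * total / (n * sum(x * x for x in throughputs_mbps))

print(jain_fairness([100.0, 100.0, 100.0, 100.0]))  # 1.0 (perfectly fair)
print(jain_fairness([400.0, 0.0, 0.0, 0.0]))        # 0.25 (maximally unfair)
```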