Search for: All records

Award ID contains: 2312834


  1. The cellular network has undergone rapid progress since its inception in the 1980s. While the rapid iteration of newer generations of cellular technology plays a key role in this evolution, the incremental and eventually wide deployment of each new technology generation also plays a vital role in delivering the promised performance improvement. In this work, we conduct the first metamorphosis study of a cellular network generation, 5G, by measuring user-experienced 5G performance from the 5G network’s birth (initial deployment) to its maturity (steady state). By analyzing a 4-year 5G performance trace of 2.65M+ Ookla® Speedtest Intelligence® measurements collected in 9 cities in the United States and Europe from January 2020 to December 2023, we unveil the detailed evolution of 5G coverage, throughput, and latency at quarterly granularity, compare the performance diversity across the 9 representative cities, and gain insights into compounding factors that affect user-experienced 5G performance, such as the adoption of 5G devices and the load on the 5G network. Our study uncovers the typical life cycle of a new cellular technology generation as it undergoes its “growing pains” on the way to delivering its promised QoE gain over the previous technology generation.
    Free, publicly-accessible full text available October 15, 2026
  2. In 2022, 3 years after the initial 5G rollout, the authors of [28] conducted an in-depth measurement study of user-perceived experience (network coverage, performance, and QoE of a set of major 5G “killer” apps) over all three major US carriers through a cross-country US driving trip (from Los Angeles to Boston). The study revealed disappointingly low 5G coverage and suboptimal network performance, falling short of the expectations needed to support the new generation of 5G “killer” apps. Now, five years into the 5G era, widely considered its midlife, 5G networks are expected to deliver stable and mature performance. In this work, we replicate the 2022 study along the same coast-to-coast route, evaluating the current state of cellular coverage and network and application performance across all three major US operators. While we observe a substantial increase in 5G coverage and a corresponding boost in network performance, two out of three operators still exhibit less than 50% 5G coverage along the driving route even five years after the initial 5G rollout. We expand the scope of the previous work by analyzing key lower-layer KPIs that directly influence network performance. Finally, we introduce a head-to-head comparison with Starlink’s LEO satellite network to assess whether emerging non-terrestrial networks (NTNs) can complement terrestrial cellular infrastructure in the next generation of wireless connectivity.
    Free, publicly-accessible full text available July 28, 2026
  3. With the rapid evolution of mobile core network (MCN) architectures, large-scale control-plane traffic (CPT) traces are critical for the R&D community to study MCN design and performance optimization. The prior-art control-plane traffic generator, SMM, relies heavily on domain knowledge and therefore requires re-design as the domain evolves. In this work, we study the feasibility of developing a high-fidelity MCN control-plane traffic generator by leveraging generative ML models. We identify key challenges in synthesizing high-fidelity CPT, including requirements shared with data-plane traffic, such as multimodal feature relationships, and requirements unique to CPT, such as stateful semantics and long-term (time-of-day) data variations. We show that state-of-the-art generative adversarial network (GAN)-based approaches, shown to work well for data-plane traffic, cannot meet these fidelity requirements of CPT, and we develop a transformer-based model, CPT-GPT, that accurately captures complex dependencies among the samples in each traffic stream (control events from the same UE) without the need for a GAN. Our evaluation of CPT-GPT on a large-scale control-plane traffic trace shows that (1) it does not rely on domain knowledge yet synthesizes control-plane traffic with fidelity comparable to SMM; and (2) compared to the prior-art GAN-based approach, it reduces the fraction of streams that violate stateful semantics by two orders of magnitude, the max y-distance of the streams’ sojourn-time distributions by 16.0%, and the transfer-learning time for deriving new hourly models by 3.36×. (An illustrative sketch of the max y-distance metric follows this list.)
    Free, publicly-accessible full text available November 4, 2025
  4. Networking research has witnessed a renaissance from exploring the seemingly unlimited predictive power of machine learning (ML) models. One promising direction is throughput prediction: accurately predicting the network bandwidth or achievable throughput of a client in real time using ML models can enable a wide variety of network applications to proactively adapt their behavior to changing network dynamics and potentially achieve significantly improved QoE. Motivated by the key role of newer generations of cellular networks in supporting the new generation of latency-critical applications such as AR/MR, in this work we focus on accurate throughput prediction in cellular networks at fine time scales, e.g., on the order of 100 ms. Through a 4-day, 1000+ km driving trip, we collect a dataset of fine-grained throughput measurements taken while driving across all three major US operators. Using the collected dataset, we conduct the first feasibility study of predicting fine-grained application throughput in real-world cellular networks with mixed LTE/5G technologies. Our analysis shows that popular ML models previously claimed to predict well in various wireless network scenarios (e.g., WiFi, or single-technology networks such as LTE only) do not predict well under app-centric metrics such as ARE95 and PARE10 (an illustrative sketch of these metrics follows this list). Further, we uncover the root cause of the poor prediction accuracy of ML models: inherent conflicting sample sequences in the fine-grained cellular network throughput data.
  5. With faster wireless networks and server GPUs, offloading high-accuracy but compute-intensive AR tasks implemented as Deep Neural Networks (DNNs) to edge servers offers a promising way to support high-QoE Augmented/Mixed Reality (AR/MR) applications. A cost-effective way for AR app vendors to deploy such edge-assisted AR apps to support a large user base is to use commercial Machine-Learning-as-a-Service (MLaaS) deployed at the edge cloud. To maximize cost-effectiveness, such an MLaaS provider faces a key design challenge: how to maximize the number of clients concurrently served by each GPU server in its cluster while meeting each client’s AR task accuracy SLA. This AR offloading inference-serving problem differs from generic inference serving or video-analytics serving in one fundamental way: because local tracking reuses the last server-returned inference result to derive results for the current frame, the offloading frequency and end-to-end latency of each AR client directly affect its AR task accuracy (for all frames). In this paper, we present ARISE, a framework that optimizes edge server capacity in serving edge-assisted AR clients. Our design exploits the intricate interplay between per-client offloading schedules and batched inference on the server by proactively coordinating the offloading request streams from different AR clients (an illustrative sketch of this coordination follows this list). Our evaluation using a large set of emulated AR clients and a 10-phone testbed shows that ARISE supports 1.7×–6.9× more clients than various baselines while keeping per-client accuracy within the client-specified accuracy SLAs.
  6. After rapid deployment worldwide over the past few years, 5G is expected to have reached a mature stage in which it delivers measurable improvements in network performance and user experience over its predecessors. In this study, we assess 5G deployment maturity via three conditions: (1) Does 5G performance remain stable over a long time span? (2) Does 5G provide better performance than its predecessor, LTE? (3) Does the technology offer similar performance across diverse geographic areas and cellular operators? We answer these questions by conducting a cross-sectional, year-long measurement study of 5G uplink performance. Leveraging a custom Android app, we collected 5G uplink performance measurements (of critical importance to latency-critical apps) spanning 8 major cities in 7 countries on two continents. Our measurements show that 5G deployment in major cities appears to have matured, with no major performance improvements observed over a one-year period; yet 5G does not provide consistently superior measurable performance over LTE, especially in terms of latency, and 5G performance remains clearly uneven across the 8 cities. Our study suggests that, while 5G deployment appears to have stagnated, it still falls short of delivering its promised performance and user-experience gains over its predecessor.
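
The CPT-GPT abstract above (item 3) reports fidelity partly as the max y-distance between sojourn-time distributions of streams. Assuming this refers to the maximum vertical gap between two empirical CDFs (i.e., the Kolmogorov-Smirnov statistic), the minimal Python sketch below shows how such a comparison could be computed; the function name, inputs, and the exponential example data are illustrative assumptions, not taken from the paper.

    import numpy as np

    def max_y_distance(real_sojourn_s, synth_sojourn_s):
        """Maximum vertical gap between the empirical CDFs of two samples
        (the Kolmogorov-Smirnov statistic), e.g., per-stream sojourn times
        in seconds from real vs. synthesized control-plane traffic."""
        real = np.sort(np.asarray(real_sojourn_s, dtype=float))
        synth = np.sort(np.asarray(synth_sojourn_s, dtype=float))
        # Evaluate both empirical CDFs on the pooled set of sample points.
        grid = np.concatenate([real, synth])
        cdf_real = np.searchsorted(real, grid, side="right") / real.size
        cdf_synth = np.searchsorted(synth, grid, side="right") / synth.size
        return float(np.max(np.abs(cdf_real - cdf_synth)))

    # Illustrative usage with made-up sojourn times (seconds):
    rng = np.random.default_rng(0)
    real_times = rng.exponential(scale=30.0, size=10_000)
    synth_times = rng.exponential(scale=32.0, size=10_000)
    print(f"max y-distance: {max_y_distance(real_times, synth_times):.4f}")

This is the same statistic that scipy.stats.ks_2samp reports; the explicit version above simply makes the "max y-distance" wording concrete.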
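
The throughput-prediction abstract above (item 4) evaluates models under the app-centric metrics ARE95 and PARE10 without defining them here. Assuming ARE95 denotes the 95th percentile of the absolute relative prediction error and PARE10 the fraction of predictions whose absolute relative error exceeds 10%, a small Python sketch of both metrics follows; these definitions, the function names, and the example numbers are assumptions for illustration, not taken from the paper.

    import numpy as np

    def absolute_relative_error(actual_mbps, predicted_mbps, eps=1e-9):
        """Per-sample |predicted - actual| / actual for throughput samples."""
        actual = np.asarray(actual_mbps, dtype=float)
        predicted = np.asarray(predicted_mbps, dtype=float)
        return np.abs(predicted - actual) / np.maximum(actual, eps)

    def are95(actual_mbps, predicted_mbps):
        """95th percentile of the absolute relative error (lower is better)."""
        errors = absolute_relative_error(actual_mbps, predicted_mbps)
        return float(np.percentile(errors, 95))

    def pare10(actual_mbps, predicted_mbps):
        """Fraction of predictions whose absolute relative error exceeds 10%."""
        errors = absolute_relative_error(actual_mbps, predicted_mbps)
        return float(np.mean(errors > 0.10))

    # Illustrative usage with made-up 100 ms throughput samples (Mbps):
    actual = np.array([120.0, 80.0, 35.0, 210.0, 5.0])
    predicted = np.array([110.0, 95.0, 30.0, 180.0, 9.0])
    print(f"ARE95 = {are95(actual, predicted):.2f}, "
          f"PARE10 = {pare10(actual, predicted):.2f}")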
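
The ARISE abstract above (item 5) builds on the observation that each AR client's offloading frequency and end-to-end latency determine its task accuracy, and that coordinating clients' offloading request streams lets the server run them as larger inference batches. The deliberately simplified Python sketch below illustrates that general idea: each client offloads at its own accuracy-driven interval, but offload times are snapped onto shared batch windows so requests from different clients arrive together. This is an illustration of the coordination concept, not ARISE's actual scheduler; the per-client intervals and the window size are made-up parameters.

    from dataclasses import dataclass

    @dataclass
    class Client:
        name: str
        # Longest gap between offloads (ms) that still keeps this client's
        # AR task accuracy within its SLA; assumed known per client.
        offload_interval_ms: int

    def build_batched_schedule(clients, batch_window_ms=10, horizon_ms=1000):
        """Snap each client's periodic offload times onto shared batch windows.

        Returns {batch_start_ms: [client names whose requests join that batch]},
        so the GPU server can serve each window's requests as one batched
        inference instead of many single-request inferences.
        """
        slots = {}
        for c in clients:
            for t in range(0, horizon_ms, c.offload_interval_ms):
                slot = (t // batch_window_ms) * batch_window_ms
                slots.setdefault(slot, []).append(c.name)
        return dict(sorted(slots.items()))

    # Illustrative usage: three clients with different accuracy-driven intervals.
    clients = [Client("phoneA", 100), Client("phoneB", 150), Client("phoneC", 300)]
    for slot_ms, names in build_batched_schedule(clients).items():
        print(f"t={slot_ms:4d} ms  batch={names}")

A real scheduler would additionally cap each batch at what the GPU can serve within every client's end-to-end latency budget, which is where the capacity-versus-accuracy trade-off described in the abstract comes in.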