This content will become publicly available on August 5, 2026

Title: 360LIVECAST: A Low-latency and Bandwidth-efficient Multicast Framework for Live 360 Video
360 video streaming presents unique challenges in bandwidth efficiency and motion-to-photon (MTP) latency, particularly for live multi-user scenarios. While viewport prediction (VP) has emerged as the dominant solution, its effectiveness in live streaming is limited by training-data scarcity and the unpredictability of live content. We present 360LIVECAST, the first practical multicast framework for live 360 video that eliminates the need for VP through two key innovations: (1) a novel viewport hull representation that combines current viewports with marginal regions, enabling local frame synthesis while reducing bandwidth by 60% compared to full panorama transmission, and (2) a viewport-specific hierarchical multicast framework leveraging edge computing to handle viewer dynamics while maintaining sub-25ms MTP latency. Extensive evaluation using real-world network traces and viewing trajectories demonstrates that 360LIVECAST achieves 26.9% lower latency than VP-based approaches while maintaining superior scalability.
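The bandwidth saving from the viewport hull idea can be illustrated with simple tile arithmetic. The sketch below is a hypothetical model, not the paper's implementation: the tile grid, viewport size, and margin width are all assumed values chosen only to show how transmitting the current viewport plus a margin compares with sending the full panorama.

```python
# Hypothetical illustration of the "viewport hull" bandwidth saving:
# transmit only the tiles covering the current viewport plus a margin,
# instead of the full panorama. Grid, viewport, and margin sizes are
# assumptions, not parameters from the paper.

def hull_bandwidth_fraction(grid_w, grid_h, vp_w, vp_h, margin):
    """Fraction of panorama tiles inside the viewport-plus-margin hull."""
    hull_w = min(grid_w, vp_w + 2 * margin)
    hull_h = min(grid_h, vp_h + 2 * margin)
    return (hull_w * hull_h) / (grid_w * grid_h)

# 12x6 tile grid (equirectangular), 4x2-tile viewport, 1-tile margin.
frac = hull_bandwidth_fraction(12, 6, 4, 2, 1)
print(f"transmitted fraction: {frac:.2f}")  # 6x4 of 72 tiles -> 0.33
```

Under these assumed numbers, roughly a third of the panorama is transmitted, in the same ballpark as the 60% reduction the abstract reports.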
Award ID(s):
2140645 2106592 1900875
PAR ID:
10635563
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
IEEE
Date Published:
Page Range / eLocation ID:
1 to 5
Subject(s) / Keyword(s):
360 video multicast view-based multicast live broadcast viewport prediction
Format(s):
Medium: X
Location:
San Jose, California
Sponsoring Org:
National Science Foundation
More Like this
  1. Bulterman, Dick; Kankanhalli, Mohan; Muehlhaueser, Max; Persia, Fabio; Sheu, Philip; Tsai, Jeffrey (Ed.)
    The emergence of 360-video streaming systems has brought about new possibilities for immersive video experiences while requiring significantly higher bandwidth than traditional 2D video streaming. Viewport prediction is used to address this problem, but interesting storylines outside the viewport are ignored. To address this limitation, we present SAVG360, a novel viewport guidance system that utilizes global content information available on the server side to enhance streaming with the best saliency-captured storyline of 360-videos. The saliency analysis is performed offline on the media server with powerful GPUs, and the saliency-aware guidance information is encoded and shared with clients through the Saliency-aware Guidance Descriptor. This enables the system to proactively guide users to switch between storylines of the video and allows users to follow or break guided storylines through a novel user interface. Additionally, we present a viewing-mode prediction algorithm to enhance video delivery in SAVG360. Evaluation on user viewport traces of 360-videos demonstrates that SAVG360 outperforms existing tiled streaming solutions in terms of overall viewport prediction accuracy and the ability to stream high-quality 360 videos under bandwidth constraints. Furthermore, a user study highlights the advantages of our proactive guidance approach over merely predicting and streaming where users look.
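The core of server-side guidance can be sketched in a few lines. This is an illustrative toy, not SAVG360's actual descriptor format: it just selects the highest-saliency region per frame as the guided storyline target that a client could steer the viewport toward.

```python
# Illustrative sketch (not SAVG360's actual Saliency-aware Guidance
# Descriptor): pick the highest-saliency region of a frame as the
# guided storyline target. Region names and scores are made up.

def guided_viewport(saliency_per_region):
    """saliency_per_region: dict mapping region_id -> saliency score."""
    return max(saliency_per_region, key=saliency_per_region.get)

target = guided_viewport({"left": 0.2, "center": 0.7, "right": 0.4})
print(target)  # center
```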
  2. Low latency is a critical user Quality-of-Experience (QoE) metric for live video streaming, and it poses significant challenges for streaming over the Internet. In this paper, we explore the design space of low-latency live video streaming by developing dynamic models and optimal control strategies. We further develop practical live video streaming algorithms within the Model Predictive Control (MPC) framework, namely MPC-Live, to maximize user QoE by adapting the video bitrate while maintaining low end-to-end video latency in dynamic network environments. Through extensive experiments driven by real network traces, we demonstrate that our live video streaming algorithms can improve performance dramatically within the latency range of two to five seconds.
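The MPC idea described above can be sketched with a toy horizon search. This is not MPC-Live's actual model: the QoE weights, horizon length, bitrate ladder, and throughput forecast are all assumed values, and real systems use calibrated latency dynamics rather than this simple over-subscription penalty.

```python
from itertools import product

# Toy sketch of MPC-style bitrate selection for low-latency live
# streaming. QoE weights, horizon, and throughput forecast are
# illustrative assumptions, not MPC-Live's actual formulation.

def mpc_select(bitrates, throughput_forecast, horizon=3,
               last_rate=0.0, w_switch=1.0, w_rebuf=4.0):
    """Return the first bitrate of the plan maximizing a simple QoE score."""
    best_plan, best_score = None, float("-inf")
    for plan in product(bitrates, repeat=horizon):
        score, prev = 0.0, last_rate
        for rate, thr in zip(plan, throughput_forecast):
            score += rate                         # quality reward
            score -= w_switch * abs(rate - prev)  # smoothness penalty
            if rate > thr:                        # unsustainable: latency grows
                score -= w_rebuf * (rate - thr)
            prev = rate
        if score > best_score:
            best_plan, best_score = plan, score
    return best_plan[0]

# Forecast sags to 2 Mbps in the third step, so the planner holds 2.5 Mbps
# rather than jumping to 5 Mbps and paying the latency penalty.
choice = mpc_select([1.0, 2.5, 5.0], [3.0, 3.0, 2.0], last_rate=2.5)
print(choice)  # 2.5
```

Only the first bitrate of the best plan is applied; the controller then re-plans at the next step with a fresh forecast, which is the standard receding-horizon pattern MPC relies on.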
  3. Striking a balance between minimizing bandwidth consumption and maintaining high visual quality stands as the paramount objective in volumetric content delivery. However, achieving this ambitious target is a substantial challenge, especially for mobile devices with constrained computational resources, given the voluminous amount of 3D data to be streamed, strict latency requirements, and high computational load. Inspired by the advantages offered by neural radiance fields (NeRF), we propose, for the first time, to deliver volumetric videos by utilizing neural-based content representations. We delve deep into potential challenges and explore viable solutions for both video-on-demand (VOD) and live video streaming services, in terms of the end-to-end pipeline, real-time and high-quality streaming, rate adaptation, and viewport adaptation. Our preliminary results lend credence to the feasibility of our research proposition, offering a promising starting point for further investigation. 
  4. 360-degree video is becoming an integral part of our content consumption through both video-on-demand and live broadcast services. However, live broadcast is still challenging due to the huge network bandwidth cost if all 360-degree views are delivered to a large viewer population over diverse networks. In this paper, we present 360BroadView, a viewer management approach to viewport prediction in 360-degree video live broadcast. We designate some high-bandwidth network viewers as leading viewers who help the others (lagging viewers) predict viewports during 360-degree video viewing and save bandwidth. Our viewer management maintains the leading viewer population despite viewer churn during live broadcast, so that the system keeps functioning properly. Our evaluation shows that 360BroadView maintains the leading viewer population at a minimal yet necessary level 97 percent of the time.
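The leading/lagging viewer split can be sketched as a simple role assignment. This is a hypothetical illustration of the idea, not 360BroadView's algorithm: the bandwidth threshold and target leader count are assumptions, and the real system must also handle churn by re-promoting viewers over time.

```python
# Hypothetical sketch of 360BroadView-style viewer management: keep a
# target number of high-bandwidth "leading" viewers whose head-motion
# traces help lagging viewers predict viewports. The threshold and
# target count below are assumed values for illustration.

def assign_roles(viewers, target_leaders, min_bw_mbps=25.0):
    """viewers: list of (viewer_id, bandwidth_mbps). Returns (leaders, laggers)."""
    eligible = sorted((v for v in viewers if v[1] >= min_bw_mbps),
                      key=lambda v: v[1], reverse=True)
    leaders = [vid for vid, _ in eligible[:target_leaders]]
    laggers = [vid for vid, _ in viewers if vid not in leaders]
    return leaders, laggers

leaders, laggers = assign_roles(
    [("a", 40.0), ("b", 10.0), ("c", 30.0), ("d", 26.0)], target_leaders=2)
print(leaders, laggers)  # ['a', 'c'] ['b', 'd']
```

On churn (a leader leaving), the same function can be re-run over the remaining viewers to refill the leading population to its target level.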
  5. In recent years, streamed 360° videos have gained popularity within Virtual Reality (VR) and Augmented Reality (AR) applications. However, they are of much higher resolutions than 2D videos, causing greater bandwidth consumption when streamed. This increased bandwidth utilization puts tremendous strain on the network capacity of the cloud providers streaming these videos. In this paper, we introduce L3BOU, a novel, three-tier distributed software framework that reduces cloud-edge bandwidth in the backhaul network and lowers average end-to-end latency for 360° video streaming applications. The L3BOU framework achieves low bandwidth and low latency by leveraging edge-based, optimized upscaling techniques. L3BOU accomplishes this by utilizing down-scaled MPEG-DASH-encoded 360° video data, known as Ultra Low Resolution (ULR) data, to which the L3BOU edge applies distributed super-resolution (SR) techniques, providing a high-quality video to the client. L3BOU is able to reduce the cloud-edge backhaul bandwidth by up to a factor of 24, and the optimized super-resolution multi-processing of ULR data provides a 10-fold latency decrease in super-resolution upscaling at the edge.
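The backhaul saving follows from simple pixel arithmetic. The sketch below is a back-of-the-envelope illustration, not L3BOU's encoder math: the resolutions are assumed, and it takes bitrate as roughly proportional to pixel count, which is only a first-order approximation for real codecs.

```python
# Back-of-the-envelope sketch of how downscaling before the backhaul
# reduces cloud-edge bandwidth when the edge restores quality with
# super-resolution. The resolutions are assumptions; the abstract
# reports up to a 24x reduction.

def backhaul_reduction(full_w, full_h, ulr_w, ulr_h):
    """Bandwidth ratio, assuming bitrate roughly scales with pixel count."""
    return (full_w * full_h) / (ulr_w * ulr_h)

# e.g. a 3840x1920 panorama downscaled to an assumed 784x392 ULR stream
print(f"~{backhaul_reduction(3840, 1920, 784, 392):.0f}x reduction")  # ~24x
```

The edge then bears the cost of SR upscaling instead, which is why the 10-fold speedup from multi-processing the SR step matters for end-to-end latency.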