Title: Dissecting Latency in 360° Video Camera Sensing Systems
360° video camera sensing is an increasingly popular technology. Compared with traditional 2D video systems, it is challenging to ensure the viewing experience in 360° video camera sensing because the massive omnidirectional data introduce adverse effects on start-up delay, event-to-eye delay, and frame rate. Understanding the time consumption of computing tasks in 360° video camera sensing is therefore a prerequisite to improving the system's delay performance and viewing experience. Despite prior measurement studies on 360° video systems, none delves into the system pipeline and dissects the latency at the task level. In this paper, we perform the first in-depth measurement study of task-level time consumption for 360° video camera sensing. We start by identifying the subtle relationship between the three delay metrics and the time consumption breakdown across the system's computing tasks. Next, we develop an open research prototype, Zeus, to characterize this relationship in various realistic usage scenarios. Our measurement of task-level time consumption demonstrates the importance of the camera's CPU-GPU transfer and the server initialization, as well as the negligible effect of 360° video stitching on the delay metrics. Finally, we compare Zeus with a commercial system to validate that our results are representative and can be used to improve today's 360° video camera sensing systems.
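The relationship between the delay metrics and the task-level breakdown can be pictured as sums over pipeline stages: event-to-eye delay accumulates the steady-state per-frame tasks, while start-up delay additionally pays one-time costs such as server initialization. The task names and timings below are purely illustrative placeholders, not measurements from Zeus:

```python
# Hypothetical task-level timings (ms) along a 360° camera sensing
# pipeline; names and values are illustrative, not Zeus's results.
camera_tasks = {"capture": 33, "cpu_gpu_transfer": 25, "stitch": 5,
                "encode": 12}
network_ms = 40
server_tasks = {"init_once": 900, "decode": 10, "render": 15}

# Event-to-eye delay: one frame's traversal of the steady-state pipeline.
event_to_eye = (sum(camera_tasks.values()) + network_ms
                + server_tasks["decode"] + server_tasks["render"])

# Start-up delay additionally pays one-time costs such as server
# initialization, which the paper finds to be significant.
startup = event_to_eye + server_tasks["init_once"]

print(event_to_eye, startup)
```

Under these made-up numbers, a one-time server initialization dominates start-up delay even though it never affects steady-state event-to-eye delay, which is the kind of distinction a task-level breakdown exposes.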
Award ID(s): 2151463
PAR ID: 10431508
Date Published:
Journal Name: Sensors
Volume: 22
Issue: 16
ISSN: 1424-8220
Page Range / eLocation ID: 6001
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    We demonstrate a 360 video navigation and streaming system for mobile HMD devices. The Navigation Graph (NG) concept is used to predict future views with a graph model that captures both the temporal and spatial viewing behavior of prior viewers. Visualization of 360 video content navigation and view prediction algorithms is used for assessment of Quality of Experience (QoE) and evaluation of the accuracy of the NG-based view prediction algorithm.
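A minimal sketch of the idea behind graph-based view prediction, assuming a simple first-order model in which edge weights count how often prior viewers moved between view regions. The region names and traces are hypothetical, and the actual Navigation Graph captures richer temporal and spatial behavior than this:

```python
from collections import defaultdict

def build_graph(traces):
    """Count transitions between view regions across prior viewers'
    traces; edges[a][b] = how often viewers moved from region a to b."""
    edges = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for cur, nxt in zip(trace, trace[1:]):
            edges[cur][nxt] += 1
    return edges

def predict_next(edges, current_view):
    """Predict the most frequently observed successor of current_view."""
    candidates = edges.get(current_view)
    if not candidates:
        return current_view  # no prior data: assume the viewer stays put
    return max(candidates, key=candidates.get)

# Hypothetical viewing traces over coarse view regions.
traces = [["front", "left", "front"], ["front", "left", "back"],
          ["front", "front", "left"]]
g = build_graph(traces)
print(predict_next(g, "front"))  # "left"
```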
  2. Federated computing, including federated learning and federated analytics, needs to meet task Service Level Objectives (SLOs) in terms of various performance metrics, e.g., mean task response time and task tail latency. The lack of control over and access to client activities requires a carefully crafted client selection process for each round of task processing to meet a designated task SLO. To achieve this, one must be able to predict task performance metrics for a given client selection per round of task execution. In this paper, we develop FedSLO, a general framework that allows task performance, in terms of a wide range of performance metrics of practical interest, to be predicted for synchronous federated computing systems, in line with the Google federated learning system architecture. Specifically, with each task performance metric expressed as a cost function of the task response time, a relationship between the task performance measure (the mean cost) and the task/subtask response time distributions is established, allowing unified task performance prediction algorithms to be developed. Practical issues concerning the computational complexity, measurement cost, and implementation of FedSLO are also addressed. Finally, we propose preliminary ideas on how to apply FedSLO to the client selection process to enable task SLO guarantees.
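The core relationship, in which each performance metric is the mean of a cost function of the task response time, can be illustrated over an empirical response-time distribution. The sample values and cost functions below are hypothetical stand-ins, not part of FedSLO:

```python
def mean_cost(response_times, cost_fn):
    """Mean task cost E[c(T)] under an empirical response-time
    distribution. Different cost functions recover different metrics:
    the identity gives mean response time; an indicator that charges 1
    past a deadline gives a tail-latency SLO violation rate."""
    return sum(cost_fn(t) for t in response_times) / len(response_times)

samples = [0.8, 1.1, 0.9, 2.5, 1.0, 3.2, 0.7, 1.4]  # seconds, hypothetical

# Mean response time: identity cost.
print(mean_cost(samples, lambda t: t))  # 1.45

# Tail-latency SLO violation rate for a hypothetical 2 s deadline.
print(mean_cost(samples, lambda t: 1.0 if t > 2.0 else 0.0))  # 0.25
```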
  3. Olanoff, D.; Johnson, K.; Spitzer, S. (Ed.)
    A key aspect of professional noticing includes attending to students' mathematics (Jacobs et al., 2010). Initially, preservice teachers (PSTs) may attend to non-mathematics-specific aspects of a classroom before attending to children's procedures and then, eventually, their conceptual reasoning (Barnhart & van Es, 2015). Use of 360 videos has been observed to increase the likelihood that PSTs will attend to more mathematics-specific student actions, due to an increased perceptual capacity, or the capacity of a representation to convey what is perceivable in a scenario (Kosko et al., in press). A 360 camera records a classroom omnidirectionally, allowing PSTs viewing the video to look in any direction. Moreover, several 360 cameras can be used in a single room to allow the viewer to move from one point in the recorded classroom to another, defined by Zolfaghari et al. (2020) as multi-perspective 360 video. Although multi-perspective 360 video has tremendous potential for immersion and presence (Gandolfi et al., 2021), we have not located empirical research clarifying whether or how it may affect PSTs' professional noticing; rather, most published research focuses on the use of a single camera. Given the dearth of research, we explored PSTs' viewing of, and teacher noticing related to, a six-camera multi-perspective 360 video. We examined 22 early childhood PSTs' viewing of a 4th-grade class using pattern blocks to find an equivalent fraction to 3/4. Towards the end of the video, one student suggested 8/12 as an equivalent fraction, but a peer claimed it was 9/12. The teacher prompted the peer to "prove it," and a brief discussion ensued before the video ended. After viewing the video, PSTs' written noticings were solicited and coded. In our initial analysis, we examined whether PSTs attended to students' fraction reasoning.
Although many PSTs attended to whether 8/12 or 9/12 was the correct answer, only 7 of 22 attended to students' part-whole reasoning about the fractions. Next, we examined the variance in how frequently PSTs switched their camera perspective using the unalikeability statistic. Unalikeability (U2) is a nonparametric measure of variance, ranging from 0 to 1, for nominal variables (Kader & Perry, 2007). Participants' scores ranged from 0 to 0.80 (median = 0.47). We then compared participants' U2 statistics by whether or not they attended to students' mathematical reasoning in their written noticing. Findings revealed no statistically significant difference (U = 38.5, p = 0.316). On average, PSTs used 2-3 camera perspectives, and there was no observable benefit to using a higher number of cameras. These findings suggest that multiple perspectives may be useful for some, but not all, PSTs.
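As a sketch, the unalikeability statistic of Kader and Perry (2007) is one minus the sum of squared category proportions, so it is 0 when every observation falls in one category and grows as observations spread across categories. Applied here, the categories would be which camera a PST was viewing at each sampled moment; the camera labels below are hypothetical:

```python
from collections import Counter

def unalikeability(observations):
    """Kader & Perry (2007) unalikeability for a nominal variable:
    1 - sum of squared category proportions. 0 means all observations
    are identical; values approach 1 as they spread across categories."""
    n = len(observations)
    counts = Counter(observations)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# A viewer who never switches cameras scores 0.
print(unalikeability(["cam1"] * 10))  # 0.0

# A viewer splitting time evenly between two cameras scores 0.5.
print(unalikeability(["cam1", "cam2"] * 5))  # 0.5
```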
  4. Smartphones have recently become a popular platform for deploying computation-intensive virtual reality (VR) applications, such as immersive video streaming (a.k.a. 360-degree video streaming). One specific challenge for the smartphone-based head-mounted display (HMD) is reducing the potentially huge power consumption caused by immersive video. To address this challenge, we first conduct an empirical power measurement study on a typical smartphone immersive streaming system, which identifies the major power consumption sources. Then, we develop QuRate, a quality-aware and user-centric frame rate adaptation mechanism to tackle the power consumption issue in immersive video streaming. QuRate optimizes immersive video power consumption by modeling the correlation between the perceivable video quality and user behavior. Specifically, QuRate exploits the user's reduced level of concentration on video frames during view switching and dynamically adjusts the frame rate without impacting the perceivable video quality. We evaluate QuRate with a comprehensive set of experiments involving 5 smartphones, 21 users, and 6 immersive videos, using empirical user head movement traces. Our experimental results demonstrate that QuRate is capable of extending the smartphone battery life by up to 1.24X while maintaining the perceivable video quality during immersive video streaming. We also conduct an Institutional Review Board (IRB)-approved subjective user study to further validate the minimal video quality impact caused by QuRate.
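The intuition behind view-switch-aware frame rate adaptation can be sketched as lowering the display frame rate while the head is moving fast (when perception of frame detail drops) and restoring it once the view is stable. The threshold, rates, and velocity trigger below are illustrative assumptions, not QuRate's actual parameters or mechanism:

```python
def select_frame_rate(head_velocity_deg_s,
                      full_fps=60, reduced_fps=30, threshold=30.0):
    """Pick a frame rate from head angular velocity (degrees/second).
    All numeric values here are hypothetical placeholders."""
    if head_velocity_deg_s > threshold:
        return reduced_fps  # fast view switch: dropped frames go unnoticed
    return full_fps         # stable view: keep full quality

print(select_frame_rate(5.0))   # 60, head nearly still
print(select_frame_rate(90.0))  # 30, rapid view switching
```

Rendering fewer frames during switches is what saves power; the design bet is that quality loss is imperceptible precisely when the user is not fixating.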
  5. Bulterman, Dick; Kankanhalli, Mohan; Muehlhaeuser, Max; Persia, Fabio; Sheu, Philip; Tsai, Jeffrey (Ed.)
    The emergence of 360-video streaming systems has brought about new possibilities for immersive video experiences, while requiring significantly higher bandwidth than traditional 2D video streaming. Viewport prediction is used to address this problem, but interesting storylines outside the viewport are ignored. To address this limitation, we present SAVG360, a novel viewport guidance system that utilizes global content information available on the server side to enhance streaming with the best saliency-captured storyline of 360-videos. The saliency analysis is performed offline on the media server with a powerful GPU, and the saliency-aware guidance information is encoded and shared with clients through the Saliency-aware Guidance Descriptor. This enables the system to proactively guide users to switch between storylines of the video, and allows users to follow or break away from guided storylines through a novel user interface. Additionally, we present a viewing mode prediction algorithm to enhance video delivery in SAVG360. Evaluation on user viewport traces in 360-videos demonstrates that SAVG360 outperforms existing tiled streaming solutions in terms of overall viewport prediction accuracy and the ability to stream high-quality 360 videos under bandwidth constraints. Furthermore, a user study highlights the advantages of our proactive guidance approach over merely predicting and streaming where users look.
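As an illustration of server-side saliency guidance, one could pick the highest-scoring region in each video segment as the guided storyline. The data layout and region names below are hypothetical and do not reflect SAVG360's actual descriptor format:

```python
def guided_storyline(saliency_per_segment):
    """For each segment, select the region with the highest precomputed
    saliency score. saliency_per_segment is a list of {region: score}
    dicts, one per segment; layout is a hypothetical stand-in."""
    return [max(seg, key=seg.get) for seg in saliency_per_segment]

# Two hypothetical segments: the salient action moves from the stage
# to the crowd, so the guided storyline switches regions.
segments = [{"stage": 0.9, "crowd": 0.4}, {"stage": 0.3, "crowd": 0.7}]
print(guided_storyline(segments))  # ['stage', 'crowd']
```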