Virtual Reality (VR), together with the network infrastructure, can provide an interactive and immersive experience for multiple users simultaneously and thus enables collaborative VR applications (e.g., a VR-based classroom). However, a satisfactory user experience requires not only high-resolution panoramic image rendering but also extremely low latency and seamless interaction. Moreover, competition for limited network resources (e.g., multiple users sharing a fixed total bandwidth) poses a significant challenge to collaborative user experience, particularly over wireless networks with time-varying capacity. While existing works have tackled some of these challenges, a principled design considering all of these factors is still missing. In this paper, we formulate a combinatorial optimization problem to maximize the Quality of Experience (QoE), defined as a linear combination of content quality, average VR content delivery delay, and the variance of quality over a finite time horizon. In particular, we incorporate the influence of imperfect motion prediction when assessing the quality of the perceived content. However, the optimal solution to this problem cannot be implemented in real time since it relies on future information. We therefore decompose the optimization problem into a series of per-time-slot combinatorial problems and develop a low-complexity algorithm that achieves at least 1/2 of the optimal value. Beyond this guarantee, trace-based simulation results reveal that our algorithm performs very close to the optimal offline solution. Furthermore, we implement our proposed algorithm in a practical system with commercial mobile devices and demonstrate its superior performance over state-of-the-art algorithms. We open-source our implementation at https://github.com/SNeC-Lab-PSU/ICDCS-CollaborativeVR.
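To make the per-slot decomposition concrete, here is a minimal, hedged Python sketch of a greedy per-slot allocation. The objective weights, data structures, and the simple bandwidth constraint are illustrative assumptions and not the paper's exact formulation; the actual algorithm and its 1/2-approximation guarantee apply to the combinatorial problem defined in the paper.

```python
# Illustrative per-slot greedy sketch; names and weights are assumptions.
from itertools import product

def per_slot_greedy(users, quality_levels, bandwidth_budget,
                    quality, delay, bw_cost, w_q=1.0, w_d=0.5):
    """Greedily assign one quality level per user under a shared bandwidth budget,
    maximizing a weighted (quality - delay) gain in the current time slot."""
    chosen = {}                      # user -> selected quality level
    remaining = bandwidth_budget
    unassigned = set(users)
    while unassigned:
        best = None
        for u, q in product(unassigned, quality_levels):
            if bw_cost[u][q] > remaining:
                continue             # does not fit in the leftover bandwidth
            gain = w_q * quality[u][q] - w_d * delay[u][q]
            if best is None or gain > best[2]:
                best = (u, q, gain)
        if best is None:
            break                    # no feasible assignment remains
        u, q, _ = best
        chosen[u] = q
        remaining -= bw_cost[u][q]
        unassigned.remove(u)
    return chosen
```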
A Dual-Objective Bandit-Based Opportunistic Band Selection Strategy for Hybrid-Band V2X Metaverse Content Update
As vehicular communication networks embrace the metaverse in beyond-5G/6G systems, delivering rich content updates via the least-interfered subchannel of the optimal frequency band in a hybrid-band vehicle-to-everything (V2X) setting emerges as a challenging optimization problem. We model this problem as a tradeoff faced by multi-band augmented/virtual reality (AR/VR) devices that push metaverse scene and environment updates to metaverse roadside units (MRSUs) while minimizing energy consumption. Due to the computational hardness of this optimization, we formulate an opportunistic band selection problem using a multi-armed bandit (MAB) that provides a good-quality solution in real time without computationally burdening the already stretched AR/VR units acting as transmitting nodes. Opportunistically scheduling rich content updates at traffic signals and in stand-still scenarios maps well onto the formulated bandit problem. We propose a Dual-Objective Minimax Optimal Stochastic Strategy (DOMOSS) as a natural solution to this problem. Through extensive computer-based simulations, we demonstrate the effectiveness of our proposal against baselines and comparable solutions. We also verify the quality of our solution and the convergence of the proposed strategy.
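As a rough illustration of the bandit-based band selection idea, the following Python sketch implements a MOSS-style index policy over (band, subchannel) arms with a scalarized throughput-versus-energy reward. The class name, the alpha weighting, and the reward model are assumptions made for illustration; DOMOSS as defined in the paper may differ.

```python
import math

class DualObjectiveMOSSSketch:
    """MOSS-style index policy over (band, subchannel) arms with a scalarized
    throughput/energy reward. Illustrative only; not the paper's DOMOSS."""

    def __init__(self, n_arms, horizon, alpha=0.7):
        self.n_arms = n_arms
        self.horizon = horizon
        self.alpha = alpha            # assumed weight on throughput vs. energy saving
        self.counts = [0] * n_arms
        self.means = [0.0] * n_arms

    def select_arm(self):
        for a in range(self.n_arms):
            if self.counts[a] == 0:
                return a              # play every arm once before using the index
        def moss_index(a):
            bonus = math.sqrt(
                max(0.0, math.log(self.horizon / (self.n_arms * self.counts[a])))
                / self.counts[a])
            return self.means[a] + bonus
        return max(range(self.n_arms), key=moss_index)

    def update(self, arm, throughput, energy):
        # Scalarize the two objectives into a single reward (assumed model).
        reward = self.alpha * throughput - (1 - self.alpha) * energy
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]
```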
- Award ID(s):
- 2210252
- PAR ID:
- 10516298
- Publisher / Repository:
- IEEE
- Date Published:
- ISBN:
- 979-8-3503-1090-0
- Page Range / eLocation ID:
- 6880 to 6885
- Subject(s) / Keyword(s):
- Performance evaluation, Energy consumption, Metaverse, Computational modeling, Proposals, Optimization, Vehicle-to-everything, Metaverse Content Update, Radio Frequency (RF), Visible Light Communication (VLC), Hybrid Band Allocation (HBA), MAB, MOSS
- Format(s):
- Medium: X
- Location:
- Kuala Lumpur, Malaysia
- Sponsoring Org:
- National Science Foundation
More Like this
Dual-connectivity streaming is a key enabler of next-generation six-Degrees-Of-Freedom (6DOF) Virtual Reality (VR) scene immersion. Indeed, conventional sub-6 GHz WiFi alone only allows reliable streaming of a low-quality baseline representation of the VR content, while emerging high-frequency communication technologies allow streaming, in parallel, of a high-quality, user-viewport-specific enhancement representation that synergistically integrates with the baseline representation to deliver high-quality VR immersion. We holistically investigate, as part of an entire future VR streaming system, two such candidate emerging technologies, Free Space Optics (FSO) and millimeter-wave (mmWave), which benefit from a large available spectrum to deliver unprecedented data rates. We analytically characterize the key components of the envisioned dual-connectivity 6DOF VR streaming system, which additionally integrates edge computing and scalable 360° video tiling, and we formulate an optimization problem to maximize the immersion fidelity delivered by the system, given the WiFi and mmWave/FSO link rates and the computing capabilities of the edge server and the users' VR headsets. This optimization problem is a mixed-integer program of high complexity, so we formulate a geometric programming framework to compute the optimal solution at low complexity. We carry out simulation experiments to assess the performance of the proposed system using actual 6DOF navigation traces that we collected from multiple mobile VR users. Our results demonstrate that our system considerably advances the traditional state of the art and enables streaming of 8K, 120 frames-per-second (fps) 6DOF content at high fidelity.
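As a toy illustration of the dual-connectivity idea (baseline layer over WiFi, enhancement layer over mmWave/FSO), the sketch below greedily enhances viewport tiles by fidelity gain per bit. All function and parameter names are assumptions; the paper's actual formulation is a mixed-integer program solved via a geometric programming framework.

```python
def allocate_dual_connectivity(tiles, wifi_budget, highband_budget,
                               base_bits, enh_bits, enh_gain):
    """Toy allocation: every tile receives the baseline layer over sub-6 GHz WiFi;
    enhancement layers are added over mmWave/FSO, chosen greedily by fidelity gain
    per bit until the high-band per-frame bit budget is spent. Illustrative only."""
    assert sum(base_bits[t] for t in tiles) <= wifi_budget, "baseline must fit on WiFi"
    enhanced = []
    budget = highband_budget
    # Rank tiles by enhancement-layer fidelity gain per bit (assumed metric).
    for t in sorted(tiles, key=lambda x: enh_gain[x] / enh_bits[x], reverse=True):
        if enh_bits[t] <= budget:
            enhanced.append(t)
            budget -= enh_bits[t]
    return enhanced

# Usage sketch: tiles = ["t0", "t1"], with per-tile bit costs and gains in dicts.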
We investigate a novel communications system that integrates scalable multi-layer 360-degree video tiling, viewport-adaptive rate-distortion-optimal resource allocation, and VR-centric edge computing and caching to enable future high-quality untethered VR streaming. Our system comprises a collection of 5G small cells that can pool their communication, computing, and storage resources to collectively deliver scalable 360-degree video content to mobile VR clients at much higher quality. Our major contributions are the rigorous design of multi-layer 360-degree tiling and related models of statistical user navigation, and the analysis and optimization of edge-based multi-user VR streaming that integrates viewport adaptation and server cooperation. We also explore the possibility of network-coded data operation and its implications for the analysis, optimization, and system performance we pursue here. We demonstrate considerable gains in delivered immersion fidelity, featuring much higher 360-degree viewport peak signal-to-noise ratio (PSNR), VR video frame rates, and spatial resolutions.
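A minimal sketch of the cooperative edge-caching flavor of this system is given below: small cells pool storage and cache the most valuable 360-degree tile layers according to a navigation-driven popularity estimate. The greedy value-per-byte rule and all names are illustrative assumptions, not the paper's optimization.

```python
def cooperative_cache(tiles, n_cells, cell_capacity, popularity, layer_sizes):
    """Toy pooled-cache sketch: rank (tile, layer) pairs by expected views per byte
    and cache them until the pooled small-cell capacity is exhausted.
    popularity[t] is a navigation-model estimate; layer_sizes[t] lists bytes per layer."""
    pooled = n_cells * cell_capacity
    cached = []
    candidates = [(t, l) for t in tiles for l in range(len(layer_sizes[t]))]
    candidates.sort(key=lambda tl: popularity[tl[0]] / layer_sizes[tl[0]][tl[1]],
                    reverse=True)
    for t, l in candidates:
        size = layer_sizes[t][l]
        if size <= pooled:
            cached.append((t, l))
            pooled -= size
    return cached
```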
Augmented/virtual reality (AR/VR) technologies can be deployed in a household environment for applications such as checking the weather or traffic reports, watching a summary of the news, or attending classes. Since AR/VR applications are highly delay-sensitive, delivering these types of reports at maximum quality can be very challenging. In this paper, we consider users who go through a series of AR/VR experience units that can be delivered at different quality levels. In order to maximize the quality of the experience while minimizing the cost of delivering it, we aim to predict users' behavior in the home and the experiences they are interested in at specific moments in time. We describe a deep-learning-based technique to predict users' requests from AR/VR devices and optimize the local caching of experience units. We evaluate the performance of the proposed technique on two real-world datasets and compare our results with other baselines. Our results show that predicting users' requests can improve the quality of experience and decrease the cost of delivery.
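A minimal sketch of the prediction-driven caching idea follows, assuming a simple recurrent sequence model over past experience-unit requests (PyTorch). The architecture, feature encoding, and top-k caching rule are assumptions for illustration rather than the paper's exact deep learning technique.

```python
import torch
import torch.nn as nn

class RequestPredictorSketch(nn.Module):
    """Assumed sequence model: embed past experience-unit IDs, run a GRU, and
    output logits over the next requested unit so it can be cached locally."""

    def __init__(self, n_units, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_units, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_units)

    def forward(self, history):           # history: (batch, seq_len) of unit IDs
        x = self.embed(history)
        _, h = self.gru(x)
        return self.head(h[-1])            # logits over the next requested unit

# Usage sketch: pre-cache the top-k most likely experience units.
# model = RequestPredictorSketch(n_units=50)
# logits = model(torch.tensor([[3, 7, 7, 12]]))
# to_cache = torch.topk(logits, k=5, dim=-1).indices
```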
Battery life is an increasingly urgent challenge for today's untethered VR and AR devices. However, the power efficiency of head-mounted displays is naturally at odds with growing computational requirements driven by better resolution, refresh rate, and dynamic range, all of which reduce the sustained usage time of untethered AR/VR devices. For instance, the Oculus Quest 2, on a fully charged battery, can sustain only 2 to 3 hours of operation. Prior display power reduction techniques mostly target smartphone displays. Directly applying smartphone display power reduction techniques, however, degrades the visual perception in AR/VR with noticeable artifacts. For instance, the "power-saving mode" on smartphones uniformly lowers the pixel luminance across the display and, as a result, presents an overall darkened visual perception to users if directly applied to VR content. Our key insight is that VR display power reduction must be cognizant of the gaze-contingent nature of high-field-of-view VR displays. To that end, we present a gaze-contingent system that, without degrading luminance, minimizes the display power consumption while preserving high visual fidelity when users actively view immersive video sequences. This is enabled by constructing 1) a gaze-contingent color discrimination model through psychophysical studies, and 2) a display power model (with respect to pixel color) through real-device measurements. Critically, due to the careful design decisions made in constructing the two models, our algorithm is cast as a constrained optimization problem with a closed-form solution, which can be implemented as a real-time, image-space shader. We evaluate our system using a series of psychophysical studies and large-scale analyses on natural images. Experiment results show that our system reduces display power by as much as 24% (14% on average) with little to no perceptual fidelity degradation.
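For intuition, the sketch below applies an eccentricity-dependent color attenuation as a NumPy image filter, mimicking the gaze-contingent idea at a very coarse level. The linear eccentricity-to-tolerance mapping and the shift toward gray are illustrative assumptions; the actual system uses a psychophysical color discrimination model, a measured display power model, and a closed-form shader solution.

```python
import numpy as np

def gaze_contingent_power_filter(image, gaze_xy, ppd, max_shift=0.2):
    """Toy image-space filter: attenuate chroma toward gray more aggressively at
    larger eccentricity from the gaze point, within an assumed linear tolerance.
    image: HxWx3 float array, gaze_xy: (x, y) pixel, ppd: pixels per degree."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ecc_deg = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / ppd   # eccentricity in degrees
    tolerance = np.clip(ecc_deg / 40.0, 0.0, 1.0) * max_shift    # assumed mapping
    # Blend each pixel toward its gray value in the periphery; a real shader would
    # work in a perceptual color space and respect a measured power model.
    gray = image.mean(axis=2, keepdims=True)
    return image * (1 - tolerance[..., None]) + gray * tolerance[..., None]
```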