

Title: Compute- and Data-Intensive Networks: The Key to the Metaverse
The worlds of computing, communication, and storage have for a long time been treated separately, and even the recent trends of cloud computing, distributed computing, and mobile edge computing have not fundamentally changed the role of networks, still designed to move data between end users and pre-determined computation nodes, without true optimization of the end-to-end compute-communication process. However, the emergence of Metaverse applications, where users consume multimedia experiences that result from the real-time combination of distributed live sources and stored digital assets, has changed the requirements for, and possibilities of, systems that provide distributed caching, computation, and communication. We argue that the real-time interactive nature and high demands on data storage, streaming rates, and processing power of Metaverse applications will accelerate the merging of the cloud into the network, leading to highly-distributed, tightly-integrated compute- and data-intensive networks becoming universal compute platforms for next-generation digital experiences. In this paper, we first describe the requirements of Metaverse applications and associated supporting infrastructure, including relevant use cases. We then outline a comprehensive cloud network flow mathematical framework, designed for the end-to-end optimization and control of such systems, and show numerical results illustrating its promising role for the efficient operation of Metaverse-ready networks.
Award ID(s):
1816699
NSF-PAR ID:
10383150
Author(s) / Creator(s):
Date Published:
Journal Name:
1st International Conference on 6G Networking (6GNet)
Page Range / eLocation ID:
1 to 8
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
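
The cloud network flow idea outlined in the abstract above can be illustrated with a toy example: give the network graph one layer per processing stage and represent computation as flow across virtual inter-layer edges, so that processing placement and routing are optimized jointly as a single min-cost flow problem. The sketch below (Python, using networkx) uses a made-up three-node topology with illustrative capacities and costs; it is not the paper's full framework, which also covers caching and multiple commodities.

```python
# Toy "cloud network flow" model: layer 0 carries the raw stream, layer 1 the
# processed stream; computation appears as flow on virtual edges from layer 0
# to layer 1, so processing placement and routing are solved together as one
# min-cost flow. Nodes: u = user, e = edge node, c = cloud (hypothetical).
import networkx as nx

G = nx.DiGraph()
for layer in (0, 1):
    for node in ("u", "e", "c"):
        G.add_node(f"{node}{layer}", demand=0)

G.nodes["u0"]["demand"] = -3   # user generates 3 units of raw stream...
G.nodes["u1"]["demand"] = 3    # ...and must receive 3 units of processed stream

# Communication edges (within each layer): capacity in flow units, weight = transport cost.
for layer in (0, 1):
    G.add_edge(f"u{layer}", f"e{layer}", capacity=4, weight=1)
    G.add_edge(f"e{layer}", f"u{layer}", capacity=4, weight=1)
    G.add_edge(f"e{layer}", f"c{layer}", capacity=10, weight=2)
    G.add_edge(f"c{layer}", f"e{layer}", capacity=10, weight=2)

# Computation edges (layer 0 -> layer 1): capacity = processing capacity,
# weight = per-unit processing cost. The edge node is close but scarce,
# the cloud plentiful but farther away.
G.add_edge("e0", "e1", capacity=2, weight=3)
G.add_edge("c0", "c1", capacity=10, weight=1)

flow = nx.min_cost_flow(G)
print("processed at edge :", flow["e0"]["e1"])
print("processed at cloud:", flow["c0"]["c1"])
print("total cost        :", nx.cost_of_flow(G, flow))
```

With these illustrative numbers, the optimal flow processes two units at the capacity-limited edge node and overflows the remaining unit to the cheaper but more distant cloud.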
More Like this
  1. Data-intensive augmented information (AgI) services (e.g., metaverse applications such as virtual/augmented reality), designed to deliver highly interactive experiences resulting from the real-time combination of live data streams and pre-stored digital content, are accelerating the need for distributed compute platforms with unprecedented storage, computation, and communication requirements. To this end, the integrated evolution of next-generation networks (5G/6G) and distributed cloud technologies (mobile/edge/cloud computing) has emerged as a promising paradigm to address the interaction- and resource-intensive nature of data-intensive AgI services. In this paper, we focus on the design of control policies for the joint orchestration of compute, caching, and communication (3C) resources in next-generation 3C networks for the delivery of data-intensive AgI services. We design the first throughput-optimal control policy that coordinates joint decisions on (i) routing paths and processing locations for live data streams, and (ii) cache selection and distribution paths for associated data objects. We then extend the proposed solution to include a max-throughput data placement policy and two efficient replacement policies. Numerical results demonstrate the superior performance obtained via the novel multi-pipeline flow control and 3C resource orchestration mechanisms of the proposed policy, compared with state-of-the-art algorithms that lack full 3C integrated control.
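
Throughput-optimal policies of this kind are typically built on max-weight (backpressure) scheduling, where each link serves the commodity with the largest queue differential. The sketch below shows only that generic building block; the queues, commodities, and link capacities are hypothetical, and the paper's actual 3C policy additionally coordinates processing locations and cache selection.

```python
# Generic backpressure (max-weight) scheduling step, the standard building
# block behind throughput-optimal network control. Illustrative sketch only;
# node names, commodities, and capacities below are made up.

def backpressure_step(queues, links):
    """queues: {node: {commodity: backlog}}, links: {(i, j): capacity}.
    Returns, per link, the commodity to serve and how many packets to move."""
    schedule = {}
    for (i, j), cap in links.items():
        # Weight of each commodity on link (i, j) is its queue differential.
        best = max(queues[i], key=lambda c: queues[i][c] - queues[j].get(c, 0))
        diff = queues[i][best] - queues[j].get(best, 0)
        if diff > 0:                      # only serve if pressure is positive
            schedule[(i, j)] = (best, min(cap, queues[i][best]))
    return schedule

queues = {
    "edge":  {"stream": 5, "object": 1},
    "cloud": {"stream": 0, "object": 4},
}
links = {("edge", "cloud"): 3, ("cloud", "edge"): 3}
print(backpressure_step(queues, links))
```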
  2. Emerging distributed cloud architectures, e.g., fog and mobile edge computing, are playing an increasingly important role in the efficient delivery of real-time stream-processing applications (also referred to as augmented information services), such as industrial automation and metaverse experiences (e.g., extended reality, immersive gaming). While such applications require processed streams to be shared and simultaneously consumed by multiple users/devices, existing technologies lack efficient mechanisms to deal with their inherent multicast nature, leading to unnecessary traffic redundancy and network congestion. In this paper, we establish a unified framework for distributed cloud network control with generalized (mixed-cast) traffic flows that allows optimizing the distributed execution of the required packet processing, forwarding, and replication operations. We first characterize the enlarged multicast network stability region under the new control framework (with respect to its unicast counterpart). We then design a novel queuing system that allows scheduling data packets according to their current destination sets, and leverage Lyapunov drift-plus-penalty control theory to develop the first fully decentralized, throughput- and cost-optimal algorithm for multicast flow control. Numerical experiments validate analytical results and demonstrate the performance gain of the proposed design over existing network control policies.
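
The drift-plus-penalty method referenced here makes a greedy per-slot decision: among the feasible actions, pick the one minimizing V·cost(a) minus the queue-weighted service it provides, so a larger V trades higher backlog for lower average cost. Below is a minimal generic sketch of that decision rule with made-up flows, actions, and rates; it is not the paper's decentralized multicast algorithm.

```python
# Minimal drift-plus-penalty decision step. Each slot, the controller picks
# the action minimizing  V * cost(a) - sum_q Q_q * mu_q(a),  trading off
# operational cost against queue drift. Action set and numbers are hypothetical.

V = 10.0  # cost/backlog trade-off parameter

def drift_plus_penalty(queues, actions):
    """queues: {flow: backlog}; actions: {name: (cost, {flow: service_rate})}."""
    def score(item):
        _, (cost, service) = item
        return V * cost - sum(queues[f] * service.get(f, 0) for f in queues)
    name, _ = min(actions.items(), key=score)
    return name

queues = {"flow_a": 12.0, "flow_b": 3.0}
actions = {
    "idle":          (0.0, {}),
    "serve_a_local": (1.0, {"flow_a": 4.0}),
    "serve_b_cloud": (2.0, {"flow_b": 5.0}),
}
print(drift_plus_penalty(queues, actions))  # -> "serve_a_local"
```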
  3. Next-generation distributed computing networks (e.g., edge and fog computing) enable the efficient delivery of delay-sensitive, compute-intensive applications by facilitating access to computation resources in close proximity to end users. Many of these applications (e.g., augmented/virtual reality) are also data-intensive: in addition to user-specific (live) data streams, they require access to shared (static) digital objects (e.g., an image database) to complete the required processing tasks. When required objects are not available at the servers hosting the associated service functions, they must be fetched from other edge locations, incurring additional communication cost and latency. In such settings, overall service delivery performance stands to benefit from jointly optimized decisions on (i) routing paths and processing locations for live data streams, together with (ii) cache selection and distribution paths for associated digital objects. In this paper, we address the problem of dynamic control of data-intensive services over edge cloud networks. We characterize the network stability region and design the first throughput-optimal control policy that coordinates processing and routing decisions for both live and static data streams. Numerical results demonstrate the superior performance (e.g., throughput, delay, and resource consumption) obtained via the novel multi-pipeline flow control mechanism of the proposed policy, compared with state-of-the-art algorithms that lack integrated stream processing and data distribution control.
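
As a static caricature of the joint decision described above, the snippet below picks a processing location and the replica to fetch the required object from by minimizing stream-transport plus object-fetch cost; all node names, replica sets, and costs are invented. The paper's policy makes these choices dynamically, based on queue backlogs, rather than from fixed per-request costs.

```python
# Toy joint choice of compute node and object replica for one request:
# minimize (cost of moving the live stream to the compute node) +
# (cost of fetching the static object from a replica to that node).

def plan_request(source, replicas, compute_nodes, cost):
    """cost[(a, b)]: path cost from a to b (0 when a == b)."""
    best = min(
        ((n, r) for n in compute_nodes for r in replicas),
        key=lambda nr: cost[(source, nr[0])] + cost[(nr[1], nr[0])],
    )
    return {"compute_at": best[0], "fetch_object_from": best[1]}

cost = {
    ("user", "edge1"): 1, ("user", "cloud"): 4,
    ("edge2", "edge1"): 1, ("cloud", "edge1"): 4,
    ("edge2", "cloud"): 3, ("cloud", "cloud"): 0,
}
print(plan_request("user", replicas={"edge2", "cloud"},
                   compute_nodes={"edge1", "cloud"}, cost=cost))
```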
  4. The proliferation of innovative mobile services such as augmented reality, networked gaming, and autonomous driving has spurred a growing need for low-latency access to computing resources that cannot be met solely by existing centralized cloud systems. Mobile Edge Computing (MEC) is expected to be an effective solution to meet the demand for low-latency services by enabling the execution of computing tasks at the network periphery, in proximity to end users. While a number of recent studies have addressed the problem of determining the execution of service tasks and the routing of user requests to corresponding edge servers, the focus has primarily been on the efficient utilization of computing resources, neglecting the fact that non-trivial amounts of data need to be stored to enable service execution, and that many emerging services exhibit asymmetric bandwidth requirements. To fill this gap, we study the joint optimization of service placement and request routing in MEC-enabled multi-cell networks with multidimensional (storage-computation-communication) constraints. We show that this problem generalizes several problems in the literature and propose an algorithm that achieves close-to-optimal performance using randomized rounding. Evaluation results demonstrate that our approach can effectively utilize the available resources to maximize the number of requests served by low-latency edge cloud servers.
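
Randomized rounding in this setting typically means solving a relaxation of the placement/routing problem and then sampling an integral service placement from the fractional solution. The sketch below shows only that sampling step on a made-up fractional solution; a real implementation would obtain the fractions from the LP relaxation and re-check the storage, computation, and communication constraints after rounding.

```python
# Sample an integral service placement from a fractional (relaxed) solution.
# The fractional values below are illustrative, not the output of any solver.
import random

def round_placement(fractional, seed=0):
    """fractional: {service: {server: x in [0, 1], summing to 1 per service}}."""
    rng = random.Random(seed)
    placement = {}
    for service, probs in fractional.items():
        servers, weights = zip(*probs.items())
        placement[service] = rng.choices(servers, weights=weights, k=1)[0]
    return placement

fractional = {
    "ar_render": {"edge1": 0.7, "cloud": 0.3},
    "detector":  {"edge1": 0.2, "edge2": 0.8},
}
print(round_placement(fractional))
```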
  5. The development of communication technologies in edge computing has fostered progress across various applications, particularly those involving vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. Enhanced infrastructure has improved data transmission network availability, promoting better connectivity and data collection from IoT devices. A notable IoT application is the Intelligent Transportation System (ITS). IoT technology integration enables ITS to access a variety of data sources, including those pertaining to weather and road conditions. Real-time data on factors like temperature, humidity, precipitation, and friction contribute to improved decision-making models. Traditionally, these models are trained at the cloud level, which can lead to communication and computational delays. However, substantial advancements in cloud-to-edge computing have reduced communication relaying and increased computational distribution, resulting in faster response times. Despite these benefits, the developments still largely depend on central cloud sources for computation due to restrictions in computational and storage capacity at the edge. This reliance leads to duplicated data transfers between edge servers and cloud application servers. Additionally, edge computing is further complicated by data models predominantly based on data heuristics. In this paper, we propose a system that streamlines edge computing by allowing computation at the edge, thus reducing latency in responding to requests across distributed networks. Our system is also designed to facilitate quick updates of predictions, ensuring vehicles receive more pertinent safety-critical model predictions. We demonstrate the construction of our system for V2V and V2I applications, incorporating cloud-ware, middleware, and vehicle-ware levels.
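
As an illustration of the general pattern described here (not the paper's actual system), the sketch below shows an edge node that answers vehicle requests from a locally cached model and periodically pulls refreshed parameters from the cloud, avoiding a cloud round-trip per request; the friction model, refresh interval, and feature names are hypothetical.

```python
# Edge node serving cached predictions and periodically refreshing its model
# from the cloud. Illustrative sketch; the "cloud-ware" model is a stand-in.
import time

class EdgePredictor:
    def __init__(self, fetch_model, refresh_s=60.0):
        self.fetch_model = fetch_model          # callable: pulls model from cloud
        self.refresh_s = refresh_s
        self.model, self.fetched_at = fetch_model(), time.monotonic()

    def predict(self, features):
        # Refresh the cached model if it has grown stale, then answer locally.
        if time.monotonic() - self.fetched_at > self.refresh_s:
            self.model, self.fetched_at = self.fetch_model(), time.monotonic()
        return self.model(features)             # served at the edge, no cloud hop

# Hypothetical cloud-side provider of a trivial road-friction model.
def fetch_friction_model():
    return lambda f: "low_friction" if f["temp_c"] < 0 and f["precip"] else "ok"

edge = EdgePredictor(fetch_friction_model, refresh_s=300)
print(edge.predict({"temp_c": -2, "precip": True}))
```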