Title: Availability Aware Online Virtual Network Function Backup in Edge Environments
With the rapid advancement of edge computing and network function virtualization, it is promising to provide flexible, low-latency network services at the edge. However, due to the vulnerability of edge services and the volatility of edge computing system states, i.e., service request rates, failure rates, and resource prices, it is challenging to minimize the online service cost while providing an availability guarantee. This paper considers the problem of online virtual network function backup under availability constraints (OVBAC) for cost minimization in edge environments. We formulate the problem based on the characteristics of the volatile system states derived from real-world data and show the hardness of the formulated problem. We use an online backup deployment scheme named Drift-Plus-Penalty (DPP) with provable near-optimal performance for the OVBAC problem. In particular, DPP needs to solve an integer programming problem at the beginning of each time slot. We propose a dynamic programming-based algorithm that optimally solves this problem in pseudo-polynomial time. Extensive real-world data-driven simulations demonstrate that DPP significantly outperforms popular baselines used in practice.
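The per-slot decision in such a drift-plus-penalty scheme can be pictured with a minimal sketch. Everything below (the control parameter V, a single virtual queue Q, one backup-count decision per slot, the prices, failure rates, and availability target, and exhaustive search in place of the paper's dynamic program over multiple functions) is an illustrative assumption rather than the paper's formulation: each slot trades the immediate resource cost against the queue-weighted availability shortfall, then updates the virtual queue.

```python
# Minimal drift-plus-penalty sketch for per-slot backup selection.
# All quantities (V, a single virtual queue Q, prices, failure rates,
# availability target) are illustrative assumptions, not the paper's model;
# exhaustive search stands in for the paper's dynamic program.

def availability(num_backups: int, failure_prob: float) -> float:
    """Probability that at least one of the deployed backups is up."""
    return 1.0 - failure_prob ** num_backups

def dpp_slot(Q: float, price: float, failure_prob: float,
             target: float, V: float, max_backups: int) -> tuple[int, float]:
    """Choose the backup count minimizing V*cost + Q*(availability shortfall),
    then return the decision and the updated virtual queue."""
    best_x, best_obj = 0, float("inf")
    for x in range(max_backups + 1):
        shortfall = max(target - availability(x, failure_prob), 0.0)
        obj = V * price * x + Q * shortfall
        if obj < best_obj:
            best_x, best_obj = x, obj
    # The virtual queue accumulates availability violations over time,
    # pushing future slots toward deploying more backups.
    Q_next = max(Q + target - availability(best_x, failure_prob), 0.0)
    return best_x, Q_next

# Example run over a few slots with time-varying prices and failure rates.
Q = 0.0
for t, (price, fail) in enumerate([(0.5, 0.3), (0.8, 0.4), (0.4, 0.3), (0.6, 0.2)]):
    x, Q = dpp_slot(Q, price, fail, target=0.99, V=1.0, max_backups=5)
    print(f"slot {t}: price={price} fail={fail} -> backups={x}, Q={Q:.2f}")
```

Under these made-up parameters the virtual queue grows across early slots until deploying a backup becomes cheaper than carrying further availability debt, which is the qualitative behavior a drift-plus-penalty controller exhibits.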
Award ID(s):
1717731
PAR ID:
10472609
Author(s) / Creator(s):
; ; ; ;
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE Transactions on Mobile Computing
ISSN:
1536-1233
Page Range / eLocation ID:
1 to 14
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Supporting smooth movement of mobile clients is important when offloading services on an edge computing platform. Interruption-free client mobility demands seamless migration of the offloading service to nearby edge servers. However, fast migration of offloading services across edge servers in a WAN environment poses significant challenges to handoff service design. In this paper, we present a novel service handoff system which seamlessly migrates offloading services to the nearest edge server while the mobile client is moving. Service handoff is achieved via container migration. We identify an important performance problem during Docker container migration. Based on our systematic study of container layer management and image stacking, we propose a migration method which leverages the layered storage system to reduce file system synchronization overhead, without depending on a distributed file system. We implement a prototype system and conduct experiments using real-world product applications. Evaluation results reveal that, compared to state-of-the-art service handoff systems designed for edge computing platforms, our system reduces the total duration of service handoff time by 80% (56%) with 5 Mbps (20 Mbps) network bandwidth.
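The layer-reuse idea behind such a handoff design can be sketched as follows (this is not the authors' implementation): ship only the layers the destination edge server does not already cache, which in the common case is just the thin writable container layer. Layer names and cache contents are invented for the example.

```python
# Illustrative sketch of layered-storage reuse during container migration:
# only the layers missing at the target edge server need to be transferred.

def layers_to_transfer(source_layers: list[str], target_cache: set[str]) -> list[str]:
    """Return the layers that must be shipped, preserving stacking order."""
    return [layer for layer in source_layers if layer not in target_cache]

source_layers = ["base-os", "runtime", "app-image", "container-rw"]  # bottom to top
target_cache = {"base-os", "runtime", "app-image"}                    # already pulled

print(layers_to_transfer(source_layers, target_cache))  # ['container-rw']
```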
  2. Mobile Edge Computing may become a prevalent platform to support applications where mobile devices have limited compute, storage, or energy, and/or data privacy concerns. In this paper, we study the efficient provisioning and management of compute resources in the Edge-to-Cloud continuum for different types of real-time applications with timeliness requirements that depend on application-level update rates and communication/compute delays. We begin by introducing a highly stylized network model allowing us to study the salient features of this problem, including its sensitivity to compute vs. communication costs, application requirements, and traffic load variability. We then propose an online decentralized service placement algorithm, based on estimating network delays and adapting application update rates, which achieves high service availability. Our results exhibit how placement can be optimized and how a load-balancing strategy c
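A hedged sketch of the delay-driven placement idea described above (server names, delay estimates, and the rate-adaptation rule are assumptions, not the paper's algorithm): pick the candidate server with the smallest estimated communication-plus-compute delay, then choose the lowest update rate whose end-to-end information age still meets the timeliness bound.

```python
# Hedged sketch: place the service on the lowest-delay candidate and pick
# the largest update period (lowest rate, hence least load) that still
# satisfies the timeliness bound. All names and numbers are assumptions.

def place_service(servers: dict[str, tuple[float, float]],
                  timeliness_bound: float):
    """servers maps name -> (estimated network delay, compute delay), in ms."""
    best = min(servers, key=lambda s: sum(servers[s]))
    total_delay = sum(servers[best])
    # Information age is roughly update period + delays; a larger period
    # reduces load, so take the largest period that still fits the bound.
    max_period = timeliness_bound - total_delay
    if max_period <= 0:
        return None  # no candidate can meet the timeliness requirement
    return best, max_period

servers = {"edge-A": (5.0, 12.0), "edge-B": (2.0, 20.0), "cloud": (40.0, 4.0)}
print(place_service(servers, timeliness_bound=60.0))  # ('edge-A', 43.0)
```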
  3. The networking industry is offering new services leveraging recent technological advances in connectivity, storage, and computing, such as mobile communications and edge computing. In this regard, extended reality, a term encompassing virtual reality, augmented reality, and mixed reality, can provide an unprecedented user experience and pioneering service opportunities such as live concerts, sports, and other events; interactive gaming and entertainment; and immersive education, training, and demos. These services require high-bandwidth, low-latency, and reliable connections, and are supported by next-generation ultra-reliable and low-latency communications in the vision of 6G mobile communication systems. In this work, we devise a novel scheme, called backup from different data centers with multicast and adaptive bandwidth provisioning, to admit reliable, low-latency, and high-bandwidth extended reality live streams in next-generation networks. We consider network services where contents are non-cacheable and investigate how backup services can be offered by different data centers with multicast and adaptive bandwidth provisioning. Our proposed service-provisioning scheme provides protection not only against link failures in the physical network but also against computing and storage failures in data centers. We develop scalable algorithms for the service-provisioning scheme and evaluate their performance on various complex network instances in a dynamic environment. Numerical results show that, compared to conventional service-provisioning schemes such as those seeking backup services from the same data center, our proposed scheme efficiently utilizes network resources, ensures higher reliability, and guarantees low latency; hence, it is highly suitable for extended reality live streams.
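The admission logic below is an illustrative rendering of the "backup from a different data center" idea, not the authors' algorithm: pick a primary and a backup from distinct data centers so that joint unavailability and worst-case path latency meet the stream's targets, at minimum provisioned-bandwidth cost. All reliability, latency, and cost figures are invented for the example.

```python
# Illustrative sketch: brute-force over distinct primary/backup data-center
# pairs, keep feasible pairs, and pick the cheapest bandwidth reservation.
from itertools import permutations

# name -> (unavailability, path latency in ms, bandwidth cost per Mbps)
data_centers = {"dc1": (0.01, 8.0, 1.0), "dc2": (0.02, 5.0, 0.8), "dc3": (0.05, 3.0, 0.5)}

def admit(dcs, max_unavail, max_latency, demand_mbps):
    best = None
    for primary, backup in permutations(dcs, 2):
        u1, l1, c1 = dcs[primary]
        u2, l2, c2 = dcs[backup]
        if u1 * u2 <= max_unavail and max(l1, l2) <= max_latency:
            cost = demand_mbps * (c1 + c2)  # bandwidth reserved on both paths
            if best is None or cost < best[0]:
                best = (cost, primary, backup)
    return best  # None means the stream cannot be admitted

print(admit(data_centers, max_unavail=1e-3, max_latency=10.0, demand_mbps=50))
```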
  4. The Internet of Things (IoT) requires distributed, large-scale data collection via geographically distributed devices. While IoT devices typically send data to the cloud for processing, this is problematic for bandwidth-constrained applications. Fog and edge computing (processing data near where it is gathered, and sending only results to the cloud) have become more popular, as they lower network overhead and latency. Edge computing often uses devices with low computational capacity, so service frameworks and middleware are needed to compose services efficiently. While many frameworks take a top-down perspective, quality of service is an emergent property of the entire system and often requires a bottom-up approach. We define services as multi-modal, allowing resource and performance tradeoffs. Different modes can be composed to meet an application's high-level goal, which is modeled as a function. We examine a case study for counting vehicle traffic through intersections in Nashville. We apply object detection and tracking to video of the intersection, which must be performed at the edge due to privacy and bandwidth constraints. We explore the hardware and software architectures and identify the various modes. This paper lays the foundation to formulate the online optimization problem presented by the system, which trades off the quantity of services against their quality subject to the available resources.
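The mode-composition trade-off described above can be sketched as a tiny constrained selection problem (the services, modes, resource units, and quality scores are hypothetical): choose one mode per service to maximize total quality within a node's resource budget.

```python
# Sketch of multi-modal service composition: enumerate mode combinations and
# keep the highest-quality one that fits the resource budget.
from itertools import product

# service -> list of (resource units, quality score) per mode
modes = {
    "detector": [(4, 0.9), (2, 0.7), (1, 0.4)],
    "tracker":  [(3, 0.8), (1, 0.5)],
}

def compose(modes, budget):
    best = None
    for choice in product(*modes.values()):
        cost = sum(c for c, _ in choice)
        quality = sum(q for _, q in choice)
        if cost <= budget and (best is None or quality > best[0]):
            best = (quality, dict(zip(modes, choice)))
    return best

print(compose(modes, budget=5))  # best feasible mode per service
```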
  5. As edge computing complements the cloud to enable computational services right at the network edge, federated learning (FL) can also benefit from nearby edge computing infrastructure. However, most prior works on federated edge learning (FEL) focus on a single shared global model during federated training in edge systems. In a real edge computing scenario, multiple FL models owned by different entities and used by different applications may co-exist. Training these models simultaneously competes for both computing and networking resources in the shared edge system. Therefore, in this work, we consider multi-model federated edge learning, where multiple FEL models are trained in the edge network and edge servers can act as either parameter servers or workers for these FEL models. We formulate a joint participant selection and learning scheduling problem, which is a non-linear mixed-integer program, aiming to minimize the total cost of all FEL models while satisfying the desired convergence rate of the trained FEL models and the edge resource constraints. We then design several algorithms by decoupling the original problem into two or three sub-problems that can be solved separately and iteratively. Extensive simulations with real-world training datasets and FEL models show that our proposed algorithms can efficiently reduce the average total cost of all FEL models in a multi-model FEL setting compared with existing algorithms.
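A minimal sketch of the decoupling idea (the cost model, per-server capacities, and per-model worker counts are assumptions, not the paper's formulation): fix how many workers each FEL model needs, then greedily give each model its cheapest servers that still have spare capacity.

```python
# Hedged sketch: greedy participant selection for multiple FEL models under
# per-server capacity limits. Numbers and names are illustrative only.

def assign_workers(models, server_cost, server_cap):
    """models: name -> workers needed; server_cost: per-round cost;
    server_cap: how many models a server can serve concurrently."""
    load = {s: 0 for s in server_cost}
    plan, total_cost = {}, 0.0
    for model, need in models.items():
        # cheapest servers that still have spare capacity
        free = sorted((s for s in server_cost if load[s] < server_cap[s]),
                      key=server_cost.get)
        chosen = free[:need]
        for s in chosen:
            load[s] += 1
            total_cost += server_cost[s]
        plan[model] = chosen
    return plan, total_cost

models = {"model-A": 2, "model-B": 1}
server_cost = {"s1": 1.0, "s2": 1.5, "s3": 3.0}
server_cap = {"s1": 1, "s2": 2, "s3": 1}
print(assign_workers(models, server_cost, server_cap))
```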