Title: Super-Cloudlet: Rethinking Edge Computing in the Era of Open Optical Networks
Edge computing is an attractive architecture for efficiently providing compute resources to applications that demand specific QoS requirements. Edge compute resources sit in close geographical proximity to where the applications' data originate and/or are consumed, avoiding unnecessary back-and-forth data transmission with a distant data center. This paper describes a federated edge computing system in which compute resources at multiple edge sites are dynamically aggregated to form distributed super-cloudlets that best respond to varying application-driven loads. In its simplest form, a super-cloudlet consists of compute resources available at two edge computing sites, or cloudlets, that are (temporarily) interconnected by dedicated optical circuits deployed to enable low-latency, high-rate data exchanges. The super-cloudlet architecture is experimentally demonstrated over the largest public OpenROADM optical network testbed to date, consisting of commercial equipment from six suppliers. The software-defined networking (SDN) PROnet Orchestrator is upgraded to concurrently manage the resources offered by the optical network equipment, compute nodes, and associated Ethernet switches, and to achieve three key functionalities of the proposed super-cloudlet architecture: service placement, auto-scaling, and offloading.
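The abstract does not detail the orchestrator's decision logic, but the auto-scaling/offloading idea can be illustrated with a minimal sketch: when a cloudlet crosses a utilization threshold, pair it with the least-loaded peer to form a super-cloudlet. All class names and the threshold below are hypothetical, not from the paper.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Cloudlet:
    name: str
    capacity: int  # total compute units at the edge site
    load: int = 0  # compute units currently in use

    def utilization(self) -> float:
        return self.load / self.capacity


def plan_super_cloudlet(local: Cloudlet, peers: List[Cloudlet],
                        scale_out: float = 0.8) -> Optional[Cloudlet]:
    """Return the peer to pair with (forming a super-cloudlet over a
    dedicated optical circuit) once the local site exceeds its scale-out
    threshold; return None if local resources still suffice."""
    if local.utilization() < scale_out:
        return None  # no need to provision an optical circuit
    # Prefer the peer with the most spare capacity to absorb offloaded load.
    candidates = [p for p in peers if p.utilization() < scale_out]
    if not candidates:
        return None
    return min(candidates, key=lambda p: p.utilization())
```

A real orchestrator would also set up the optical circuit and migrate or place services; this sketch covers only the peer-selection step.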
Award ID(s):
1956357
PAR ID:
10292097
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
2021 International Conference on Computer Communications and Networks (ICCCN)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Edge computing has much lower elasticity than cloud computing because cloudlets have much smaller physical and electrical footprints than a data center. This hurts the scalability of applications that involve low-latency edge offload. We show how this problem can be addressed by leveraging the growing sophistication and compute capability of recent wearable devices. We investigate four Wearable Cognitive Assistance applications on three wearable devices, and show that the technique of offload shaping can significantly reduce network utilization and cloudlet load without compromising accuracy or performance. Our investigation considers the offload shaping strategies of mapping processes to different computing tiers, gating, and decluttering. We find that all three strategies offer significant bandwidth savings compared to transmitting full camera images to a cloudlet. Two out of the three devices we test are capable of running all offload shaping strategies within a reasonable latency bound.
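Gating, one of the offload shaping strategies above, can be sketched in a few lines: a cheap on-device score decides which frames ever leave the wearable. The score function and threshold here are hypothetical stand-ins for whatever lightweight detector a device runs.

```python
def gate_frames(frames, score, threshold):
    """Transmit only frames whose cheap on-device score clears the
    threshold; everything else is dropped at the wearable (gating)."""
    return [f for f in frames if score(f) >= threshold]


def bandwidth_saving(total_frames, sent_frames):
    """Fraction of frames (hence uplink bytes, for fixed-size frames)
    that gating kept off the network."""
    return 1.0 - sent_frames / total_frames
```

Decluttering would go one step further, cropping each transmitted frame to its region of interest before upload.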
  2. The convergence of 5G wireless networks and edge computing enables new edge-native applications that are simultaneously bandwidth-hungry, latency-sensitive, and compute-intensive. Examples include deeply immersive augmented reality, wearable cognitive assistance, privacy-preserving video analytics, edge-triggered serendipity, and autonomous swarms of featherweight drones. Such edge-native applications require network-aware and load-aware orchestration of resources across the cloud (Tier-1), cloudlets (Tier-2), and device (Tier-3). This paper describes the architecture of Sinfonia, an open-source system for such cross-tier orchestration. Key attributes of Sinfonia include: support for multiple vendor-specific Tier-1 roots of orchestration, providing end-to-end runtime control that spans technical and non-technical criteria; use of third-party Kubernetes clusters as cloudlets, with unified treatment of telco-managed, hyperconverged, and just-in-time variants of cloudlets; masking of orchestration complexity from applications, thus lowering the barrier to creation of new edge-native applications. We describe an initial release of Sinfonia (https://github.com/cmusatyalab/sinfonia), and share our thoughts on evolving it in the future.
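The network-aware and load-aware placement the Sinfonia abstract describes can be caricatured as a two-criterion filter-then-rank step; the field names and latency bound below are illustrative assumptions, not Sinfonia's actual API.

```python
def select_cloudlet(candidates, max_rtt_ms=50.0):
    """Among discovered Tier-2 cloudlets, keep those meeting the client's
    latency bound (network-aware) and pick the least-loaded (load-aware);
    return None to signal fallback to a Tier-1 cloud deployment."""
    feasible = [c for c in candidates if c["rtt_ms"] <= max_rtt_ms]
    if not feasible:
        return None
    return min(feasible, key=lambda c: c["load"])
```

The real system additionally spans vendor-specific Tier-1 roots and deploys onto third-party Kubernetes clusters, which this sketch omits.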
  3. With the emergence of IoT applications, 5G, and edge computing, network resource allocation has shifted toward the edge, bringing services closer to end users. These applications often require communication with the core network for purposes that include cloud storage, compute offloading, 5G-and-Beyond transport between the centralized unit (CU), distributed unit (DU), and core network, and centralized network monitoring and management. As the number of these services increases, efficient and reliable connectivity between the edge and core networks becomes essential. Wavelength Division Multiplexing (WDM) is well suited to transferring large amounts of data by simultaneously transmitting several wavelength-multiplexed data streams over each fiber-optic link. WDM is the technology of choice in mid-haul and long-haul transmission networks, including edge-to-core networks, to offer increased transport capacity. Optical networks are prone to failures of components such as fiber links, sites, and transmission ports. A single network element failure can cause significant traffic loss by disrupting many active data flows, so fault-tolerant and reliable network designs remain a priority. The "dual-hub and dual-spoke" architecture is often used in metro area networks (MANs). A dual-hub, or more generally a multi-hub, network consists of a set of designated destination nodes (hubs) to which the data traffic from all other nodes (the peripherals) is directed. Multiple hubs offer redundant connectivity to and from the core or wide area network (WAN) through geographical diversity. The routing of the connections (also known as lightpaths) between a peripheral node and the hubs must be carefully computed to maximize path diversity across the edge-to-core network: whenever possible, the established redundant lightpaths must not share a common Shared Risk Link Group (SRLG). An algorithm is proposed to compute the most reliable set of SRLG-disjoint shortest paths from any peripheral to all hubs. The proposed algorithm can also be used to evaluate overall edge-to-core network reliability, quantified through a newly introduced figure of merit.
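The abstract does not give the paper's algorithm, but SRLG-disjoint routing can be illustrated with a simple greedy sketch: run a shortest-path search to the first hub, record the SRLGs its links traverse, then ban those SRLGs when routing to the next hub. Edge annotations and the greedy order are assumptions for illustration.

```python
import heapq


def dijkstra(adj, src, dst, banned_srlgs=frozenset()):
    """Shortest path avoiding edges whose SRLG set intersects banned_srlgs.
    adj: {node: [(neighbor, cost, srlg_set), ...]}. Returns (cost, path)
    or None if no feasible path exists."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:  # reconstruct the path back to the source
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, w, srlgs in adj[u]:
            if srlgs & banned_srlgs:
                continue  # this link shares a risk group already in use
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return None


def srlg_disjoint_paths(adj, peripheral, hubs):
    """Greedily route from a peripheral to each hub, banning the SRLGs
    already used so redundant lightpaths share no risk group."""
    used, paths = set(), []
    for hub in hubs:
        result = dijkstra(adj, peripheral, hub, frozenset(used))
        if result is None:
            paths.append(None)  # no SRLG-disjoint lightpath to this hub
            continue
        _, path = result
        paths.append(path)
        for u, v in zip(path, path[1:]):
            for nbr, _, srlgs in adj[u]:
                if nbr == v:
                    used |= srlgs
    return paths
```

Greedy banning is not guaranteed optimal (routing to the hubs jointly can succeed where sequential routing fails); the paper's "most reliable set" objective presumably addresses exactly that.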
  4. A new breed of applications, such as autonomous driving, and their need for computation-aided quick decision making have motivated the delegation of compute-intensive services (e.g., video analytics) to more powerful surrogate machines at the network edge, i.e., edge computing (EC). Recently, the notion of pervasive edge computing (PEC) has emerged, in which users' devices can join the pool of computing resources that perform edge computing. Including users' devices increases the computing capability at the edge (adding to the infrastructure servers), but compared to conventional edge ecosystems it also introduces new challenges, such as service orchestration (i.e., service placement, discovery, and migration). We propose uDiscover, a novel user-driven service discovery and utilization framework for the PEC ecosystem. In designing uDiscover, we considered the Named-Data Networking architecture for balancing users' workloads and reducing user-perceived latency. We propose proactive and reactive service discovery approaches and assess their performance in PEC and infrastructure-only ecosystems. Our simulation results show that (i) the PEC ecosystem reduces user-perceived delays by up to 70%, and (ii) uDiscover selects the most suitable server ("accurate" delay estimates with less than 10% error) to execute any given task.
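The delay-estimate-based server selection that uDiscover's evaluation highlights can be sketched as a simple cost model: end-to-end delay is network RTT plus queueing plus compute time. The field names and units below are assumptions, not uDiscover's actual interface.

```python
def pick_server(servers, task_megacycles):
    """Reactive discovery step: estimate completion delay at each
    responding PEC node (infrastructure server or user device) and pick
    the minimum-delay one."""
    def est_delay_ms(s):
        compute_ms = task_megacycles / s["mcycles_per_ms"]
        return s["rtt_ms"] + s["queue_ms"] + compute_ms
    return min(servers, key=est_delay_ms)
```

A nearby but slow user device can lose to a farther, faster infrastructure server once compute time dominates, which is why accurate delay estimates matter.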
  5. Geo-distributed Edge sites are expected to cater to the stringent demands of situation-aware applications like collaborative autonomous vehicles and drone swarms. While clients of such applications benefit from having network-proximal compute resources, an Edge site has limited resources compared to the traditional Cloud. Moreover, the load experienced by an Edge site depends on a client's mobility pattern, which may often be unpredictable. The Function-as-a-Service (FaaS) paradigm is aptly poised to handle the ephemeral nature of workload demand at Edge sites. In FaaS, applications are decomposed into containerized functions, enabling fine-grained resource management. However, spatio-temporal variations in client mobility can still lead to rapid saturation of resources beyond the capacity of an Edge site. To address this challenge, we develop FEO (Federated Edge Orchestrator), a resource allocation scheme across the geo-distributed Edge infrastructure for FaaS. FEO employs a novel federated policy to offload function invocations to peer sites with spare resource capacity, without the need to frequently share knowledge about available capacities among participating sites. Detailed experiments show that FEO's approach can reduce a site's P99 latency by almost 3x while maintaining application service level objectives at all other sites.
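A federated offload policy that avoids frequent capacity sharing can be sketched with local admission plus random probing of peers; this is an illustrative stand-in, not FEO's published policy, and all names are hypothetical.

```python
import random
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Site:
    name: str
    capacity: int  # container slots for function invocations
    used: int = 0

    def free(self) -> int:
        return self.capacity - self.used


def route_invocation(local: Site, peers: List[Site], demand: int,
                     rng=random) -> Optional[Site]:
    """Admit the invocation locally if capacity allows; otherwise probe
    peer sites in random order and offload to the first with spare
    capacity. No global view of capacities is maintained."""
    if local.free() >= demand:
        local.used += demand
        return local
    for peer in rng.sample(peers, len(peers)):
        if peer.free() >= demand:
            peer.used += demand
            return peer
    return None  # all sites saturated; invocation queued or rejected
```

Random probing trades a little extra latency on the offload path for not having to gossip capacity state among sites, which is the flavor of trade-off the abstract describes.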