Burst-parallel serverless applications invoke thousands of short-lived distributed functions to complete complex jobs such as data analytics, video encoding, or compilation. While these tasks execute in seconds, starting and configuring the virtual network they rely on is a major bottleneck that can consume up to 84% of total startup time. In this paper we characterize the magnitude of this network cold start problem in three popular overlay networks: Docker Swarm, Weave, and Linux Overlay. We focus on end-to-end startup time, which encompasses both the time to boot a group of containers and the time to interconnect them. Our primary observation is that existing overlay approaches for serverless networking scale poorly in short-lived serverless environments. Based on our findings, we develop Particle, a network stack tailored for multi-node serverless overlay networks that optimizes network creation without sacrificing multi-tenancy, generality, or throughput. When integrated into a serverless burst-parallel video processing pipeline, Particle improves application runtime by 2.4-3x over existing overlays.
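To make the boot-versus-interconnect split concrete, here is a minimal timing sketch, not the paper's measurement harness. It assumes a Swarm-enabled Docker host with the `docker` CLI available; the network name `net0` and the container names are illustrative.

```python
import subprocess, time

def sh(*args):
    """Run a command, raising on failure."""
    subprocess.run(args, check=True, capture_output=True)

# Part 1: create the overlay network (the "interconnect" setup cost).
t0 = time.monotonic()
sh("docker", "network", "create", "--driver", "overlay", "--attachable", "net0")
t_net = time.monotonic() - t0

# Part 2: boot a small group of containers attached to that overlay.
t0 = time.monotonic()
for name in ("c0", "c1"):
    sh("docker", "run", "-d", "--name", name, "--network", "net0",
       "alpine", "sleep", "300")
t_boot = time.monotonic() - t0

# First packet across the overlay, capturing any lazy route/ARP setup.
t0 = time.monotonic()
sh("docker", "exec", "c0", "ping", "-c", "1", "c1")
t_first = time.monotonic() - t0

print(f"overlay create: {t_net:.3f}s  container boot: {t_boot:.3f}s  "
      f"first ping: {t_first:.3f}s")
```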
Fast Function Instantiation with Alternate Virtualization Approaches
This paper addresses the need, in emerging domains such as serverless and in-network computing where applications are often hosted on virtualized compute instances (e.g., containers and unikernels), for applications to start up as quickly as possible. We provide a qualitative and quantitative analysis of containers and unikernels with regard to startup time, analyzing them in depth to identify the key components of startup latency and their impact at scale. We study how startup time scales as we launch multiple instances concurrently, and we measure the contribution of popular Container Network Interface (CNI) plugins to startup time.
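One way to isolate the network's share of startup latency, sketched below as an illustration rather than the paper's methodology, is to launch a batch of containers concurrently with networking disabled (`--network none`) and again with a network attached, then compare the timings. The sketch assumes a local Docker daemon; the image, concurrency level, and container names are illustrative.

```python
import statistics, subprocess, time
from concurrent.futures import ThreadPoolExecutor

def launch(i, network):
    """Start one detached container and return its wall-clock launch time."""
    t0 = time.monotonic()
    subprocess.run(
        ["docker", "run", "-d", "--rm", "--name", f"bench-{network}-{i}",
         "--network", network, "alpine", "sleep", "60"],
        check=True, capture_output=True)
    return time.monotonic() - t0

N = 20  # concurrency level; raise it to observe contention at scale
for network in ("none", "bridge"):  # "none" skips network setup entirely
    with ThreadPoolExecutor(max_workers=N) as pool:
        times = list(pool.map(lambda i: launch(i, network), range(N)))
    print(f"--network {network}: mean {statistics.mean(times):.3f}s  "
          f"max {max(times):.3f}s")
```

The gap between the two runs approximates the networking stack's contribution to startup; under Kubernetes, a CNI plugin would be exercised through the kubelet rather than the Docker CLI, so absolute numbers would differ.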
- Award ID(s): 1763929
- PAR ID: 10299324
- Date Published:
- Journal Name: 2021 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN)
- Page Range / eLocation ID: 1 to 6
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Recent work has shown that lightweight virtualization like Docker containers can be used in HPC to package applications with their runtime environments. In many respects, applications in containers perform similarly to native applications. Other work has shown that containers can have adverse effects on the latency variation of communications with the enclosed application. This latency variation may have an impact on the performance of some HPC workloads, especially those dependent on synchronization between processes. In this work, we measure the latency characteristics of messages between Docker containers, and then compare those measurements to the performance of real-world applications. Our specific goals are to: measure the changes in mean and variation of latency with Docker containers, study how this affects the synchronization time of MPI processes, and measure the impact of these factors on real-world applications such as the NAS Parallel Benchmark (NPB).
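As a rough illustration of the mean-and-variation measurement this study describes (a minimal sketch, not the study's actual harness; it assumes `mpi4py` is installed), a two-rank ping-pong probe looks like this:

```python
# Run as: mpirun -np 2 python pingpong.py
import statistics
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = bytearray(8)   # tiny message, so latency dominates over bandwidth
samples = []

for _ in range(1000):
    comm.Barrier()   # align both ranks before each round trip
    if rank == 0:
        t0 = MPI.Wtime()
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
        samples.append((MPI.Wtime() - t0) / 2)  # one-way latency estimate
    else:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)

if rank == 0:
    print(f"mean {statistics.mean(samples) * 1e6:.2f} us, "
          f"stdev {statistics.stdev(samples) * 1e6:.2f} us")
```

Running the two ranks natively and then in separate containers, and comparing the standard deviations, exposes the latency variation attributed to the container network path.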
5G edge clouds promise a pervasive computational infrastructure a short network hop away, enabling a new breed of smart devices that respond in real-time to their physical surroundings. Unfortunately, today's operating system designs fail to meet the goals of scalable isolation, dense multi-tenancy, and high performance needed for such applications. In this paper we introduce EdgeOS, which emphasizes system-wide isolation as fine-grained as per-client. We propose a novel memory movement accelerator architecture that employs data copying to enforce strong isolation without performance penalties. To support scalable isolation, we introduce a new protection domain implementation that offers lightweight isolation, fast startup, and low latency even under high churn. We implement EdgeOS in a microkernel-based OS and demonstrate running high-scale network middleboxes using the Click software router and endpoint applications such as memcached, a TLS proxy, and neural network inference. We reduce startup latency by 170X compared to Linux processes, and improve latency by three orders of magnitude when running 300 to 1000 edge-cloud memcached instances on one server.
With close-to-native performance, Linux containers are becoming the de facto platform for cloud computing. While various solutions have been proposed to secure applications and containers in the cloud environment by leveraging Intel SGX, most cloud operators do not yet offer SGX as a service. This is likely due to a number of security, scalability, and usability concerns coming from both cloud providers and users. Cloud operators worry about the security guarantees of unofficial SDKs, limited support for remote attestation within containers, limited physical memory for the Enclave Page Cache (EPC) making it difficult to support hundreds of enclaves, and potential DoS attacks against EPC by malicious users. Meanwhile, end users need to worry about careful program partitioning to reduce the TCB and adapting legacy applications to use SGX. We note that most of these concerns are the result of an incomplete infrastructure, from the OS to the application layer. We address these concerns with lxcsgx, which allows SGX applications to run inside containers while also: enabling SGX remote attestation for containerized applications, enforcing EPC memory usage control on a per-container basis, providing a general software TPM using SGX to augment legacy applications, and supporting partitioning with a GCC plugin. We then retrofit Nginx/OpenSSL and Memcached using the software TPM and SGX partitioning to defend against known and potential attacks. Thanks to the small EPC footprint of each enclave, we are able to run up to 100 containerized Memcached instances without EPC swapping. Our evaluation shows the overhead introduced by lxcsgx is less than 6.9% for simple SGX applications, 9.5% for Nginx/OpenSSL, and 20.9% for containerized Memcached.
Efficient high-conversion-ratio power delivery is needed for many portable computing applications which require sub-volt supply rails but operate from batteries or USB power sources. In such applications, the power management unit should have a small volume, area, and height while providing fast transient response. Past work has shown favorable performance of hybrid switched-capacitor (SC) converters to reduce the size of needed inductor(s), which can soft-charge high-density SC networks while supporting efficient voltage regulation [1-5]. However, the hybrid approach has its own challenges, including balancing the voltage of the flying capacitor and achieving safe but fast startup. Rapid supply transients, including startup, can cause voltage stress on power switches if flying capacitors are not quickly regulated. Past approaches such as precharge networks [3] or fast balancing control [5] have startup times that are on the order of milliseconds. This paper presents a two-stage cascaded hybrid SC converter that features a fast transient response with automatic flying capacitor balancing for low-voltage applications (i.e., 5V:0.4 to 1.2V from a USB interface). The converter is nearly standalone and all gate drive supplies are generated internally. Measured results show a peak efficiency of 96.9%, <36mV under/overshoot for 1A/μs load transients, and self-startup time on the order of 10μs (over 100× faster than previous works).