Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from this site's.
-
Free, publicly-accessible full text available November 20, 2025
-
Free, publicly-accessible full text available October 1, 2025
-
IoT devices used in applications such as monitoring agricultural soil moisture or assessing urban air quality are typically battery-operated and energy-constrained. We develop a lightweight and distributed cooperative sensing scheme that provides energy-efficient sensing of an area by reducing spatio-temporal overlaps in coverage across a multi-sensor IoT network. Our “Sensing Together” solution includes two algorithms, Distributed Task Adaptation (DTA) and Distributed Block Scheduler (DBS), which coordinate the sensing operations of the IoT network through information shared using a distributed “token passing” protocol. DTA adapts the sensing rates from their “raw” values (optimized for each IoT device independently) to minimize spatial redundancy in coverage, while ensuring that a desired coverage threshold is met at all points in the covered area. DBS then schedules task execution times across all IoT devices in a distributed manner to minimize temporal overlap. On-device evaluation shows a small token size and execution times of less than 0.6 s on average, while simulations show average energy savings of 5% per IoT device under various weather conditions. Moreover, when devices had more significant coverage overlaps, cooperative sensing yielded energy savings exceeding 30%. In simulations of larger networks, average energy savings range between 3.34% and 38.53%, depending on weather conditions. Our solutions consistently demonstrate near-optimal performance under various scenarios, showcasing their capability to efficiently reduce temporal overlap during sensing task scheduling.
Free, publicly-accessible full text available September 23, 2025
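The abstract does not include the algorithms themselves; the Python sketch below only illustrates the kind of rate adaptation DTA performs, namely backing off a device's sensing rate where a neighbour already covers part of its area. Every name and parameter here (Device, overlap, min_fraction) is an assumption for illustration, not the paper's design.

```python
# Hypothetical sketch of DTA-style rate adaptation (illustrative only):
# devices with heavy coverage overlap sense less often, but each keeps a
# floor on its rate as a stand-in for the paper's coverage-threshold constraint.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    raw_rate: float   # samples per hour, optimized for the device in isolation
    overlap: float    # fraction of this device's area also covered by a neighbour (0..1)

def adapt_rates(devices, min_fraction=0.5):
    """Return adapted sensing rates: split the overlapped work with a neighbour,
    but never drop below min_fraction of the raw rate."""
    adapted = {}
    for d in devices:
        share = (1.0 - d.overlap) + d.overlap / 2.0   # keep non-overlapped work, share the rest
        adapted[d.name] = max(d.raw_rate * share, d.raw_rate * min_fraction)
    return adapted

if __name__ == "__main__":
    fleet = [Device("A", raw_rate=12, overlap=0.6), Device("B", raw_rate=12, overlap=0.6)]
    print(adapt_rates(fleet))   # both overlapping devices back off from 12 to 8.4 samples/hour
```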
-
Data centers require high-performance and efficient networking for fast and reliable communication between applications. TCP/IP-based networking still plays a dominant role in data center networking to support a wide range of Layer-4 and Layer-7 applications, such as middleboxes and cloud-based microservices. However, traditional kernel-based TCP/IP stacks face performance challenges due to overheads such as context switching, interrupts, and copying. We present Z-stack, a high-performance userspace TCP/IP stack with a zero-copy design. Utilizing DPDK's Poll Mode Driver, Z-stack bypasses the kernel and moves packets between the NIC and the protocol stack in userspace, eliminating the overhead associated with kernel-based processing. Z-stack employs polling-based packet processing, which improves performance under high loads and eliminates receive livelocks compared to interrupt-driven packet processing. With its zero-copy socket design, Z-stack eliminates copies when moving data between the user application and the protocol stack, which further minimizes latency and improves throughput. In addition, Z-stack seamlessly integrates with shared memory processing within the node, eliminating duplicate protocol processing and serialization/deserialization overheads for intra-node communication. Z-stack uses F-stack, which integrates the proven TCP/IP stack from FreeBSD, as its starting point, providing a versatile solution for a variety of cloud use cases and improving the performance of data center networking.
Free, publicly-accessible full text available July 10, 2025
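As a purely conceptual illustration of the polling, zero-copy pattern the abstract describes, the sketch below drains a receive ring without blocking and hands payloads to the application as views into the original buffers. It is Python pseudocode for the pattern, not DPDK or Z-stack's API; poll_rx_ring and ProtocolStack are hypothetical names.

```python
# Illustrative-only sketch: poll-mode receive plus in-place (zero-copy) delivery.

def poll_rx_ring(ring, budget=32):
    """Drain up to `budget` packets from an RX ring without blocking (poll mode)."""
    batch = []
    while ring and len(batch) < budget:
        batch.append(ring.pop(0))
    return batch

class ProtocolStack:
    def deliver(self, pkt: bytearray):
        # Zero-copy: expose the payload as a memoryview into the original buffer,
        # so the application reads it in place instead of copying it out.
        return memoryview(pkt)[14:]        # skip a 14-byte Ethernet header

if __name__ == "__main__":
    rx_ring = [bytearray(b"\x00" * 14 + b"hello from the NIC")]
    stack = ProtocolStack()
    for pkt in poll_rx_ring(rx_ring):
        print(bytes(stack.deliver(pkt)))   # b'hello from the NIC'
```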
-
Federated Learning (FL) typically involves a large-scale, distributed system with individual user devices/servers training models locally and then aggregating their model updates on a trusted central server. Existing systems for FL often use an always-on server for model aggregation, which can be inefficient in terms of resource utilization. They also may be inelastic in their resource management. This is particularly exacerbated when aggregating model updates at scale in a highly dynamic environment with varying numbers of heterogeneous user devices/servers. We present LIFL, a lightweight and elastic serverless cloud platform with fine-grained resource management for efficient FL aggregation at scale. LIFL is enhanced by a streamlined, event-driven serverless design that eliminates the individual, heavyweight message broker and replaces inefficient container-based sidecars with lightweight eBPF-based proxies. We leverage shared memory processing to achieve high-performance communication for hierarchical aggregation, which is commonly adopted to speed up FL aggregation at scale. We further introduce the locality-aware placement in LIFL to maximize the benefits of shared memory processing. LIFL precisely scales and carefully reuses the resources for hierarchical aggregation to achieve the highest degree of parallelism, while minimizing aggregation time and resource consumption. Our preliminary experimental results show that LIFL achieves significant improvement in resource efficiency and aggregation speed for supporting FL at scale, compared to existing serverful and serverless FL systems.
Free, publicly-accessible full text available May 13, 2025
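The hierarchical aggregation that LIFL accelerates can be sketched in a few lines. The Python example below shows the two-tier averaging pattern, leaf aggregators averaging a shard of client updates and a root aggregator combining the partial results weighted by client counts; it assumes equally weighted clients, and none of the names are LIFL's API.

```python
# Minimal sketch of two-tier (hierarchical) FL aggregation, illustrative only.

def weighted_average(updates, weights):
    total = sum(weights)
    dim = len(updates[0])
    return [sum(u[i] * w for u, w in zip(updates, weights)) / total for i in range(dim)]

def hierarchical_aggregate(client_updates, shard_size=4):
    """Aggregate in two tiers so no single aggregator touches every update."""
    partials, counts = [], []
    for start in range(0, len(client_updates), shard_size):
        shard = client_updates[start:start + shard_size]
        partials.append(weighted_average(shard, [1.0] * len(shard)))   # leaf tier
        counts.append(len(shard))
    return weighted_average(partials, counts)                          # root tier

if __name__ == "__main__":
    updates = [[float(i), float(i) * 2] for i in range(10)]   # 10 fake model updates
    print(hierarchical_aggregate(updates))                    # equals the plain average
```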
-
Gibbons, P.; Pekhimenko, G.; De Sa, C. (Eds.)
Federated Learning (FL) typically involves a large-scale, distributed system with individual user devices/servers training models locally and then aggregating their model updates on a trusted central server. Existing systems for FL often use an always-on server for model aggregation, which can be inefficient in terms of resource utilization. They also may be inelastic in their resource management. This is particularly exacerbated when aggregating model updates at scale in a highly dynamic environment with varying numbers of heterogeneous user devices/servers. We present LIFL, a lightweight and elastic serverless cloud platform with fine-grained resource management for efficient FL aggregation at scale. LIFL is enhanced by a streamlined, event-driven serverless design that eliminates the individual, heavyweight message broker and replaces inefficient container-based sidecars with lightweight eBPF-based proxies. We leverage shared memory processing to achieve high-performance communication for hierarchical aggregation, which is commonly adopted to speed up FL aggregation at scale. We further introduce the locality-aware placement in LIFL to maximize the benefits of shared memory processing. LIFL precisely scales and carefully reuses the resources for hierarchical aggregation to achieve the highest degree of parallelism, while minimizing aggregation time and resource consumption. Our preliminary experimental results show that LIFL achieves significant improvement in resource efficiency and aggregation speed for supporting FL at scale, compared to existing serverful and serverless FL systems.
Free, publicly-accessible full text available May 13, 2025
-
SPRIGHT: High-Performance eBPF-Based Event-Driven, Shared-Memory Processing for Serverless Computing
Serverless computing promises an efficient, low-cost compute capability in cloud environments. However, existing solutions, epitomized by open-source platforms such as Knative, include heavyweight components that undermine this goal of serverless computing. Additionally, such serverless platforms lack dataplane optimizations to achieve efficient, high-performance function chains that facilitate the popular microservices development paradigm. Their use of unnecessarily complex and duplicate capabilities for building function chains severely degrades performance. ‘Cold-start’ latency is another deterrent. We describe SPRIGHT, a lightweight, high-performance, responsive serverless framework. SPRIGHT exploits shared memory processing and dramatically improves the scalability of the dataplane by avoiding unnecessary protocol processing and serialization-deserialization overheads. SPRIGHT extensively leverages event-driven processing with the extended Berkeley Packet Filter (eBPF). We creatively use eBPF’s socket message mechanism to support shared memory processing, with overheads being strictly load-proportional. Compared to constantly running, polling-based DPDK, SPRIGHT achieves the same dataplane performance with 10× less CPU usage under realistic workloads. Additionally, eBPF benefits SPRIGHT by replacing heavyweight serverless components, allowing us to keep functions ‘warm’ with negligible penalty. Our preliminary experimental results show that SPRIGHT achieves an order of magnitude improvement in throughput and latency compared to Knative, while substantially reducing CPU usage, and obviates the need for ‘cold-start’.
Free, publicly-accessible full text available June 1, 2025
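To make the shared-memory chaining idea concrete, the sketch below passes only a small (offset, length) descriptor between chained functions while the payload stays in a shared buffer pool, so no hop serializes or copies the request body. It uses Python's standard-library shared memory purely for illustration; SPRIGHT itself does this with eBPF socket messages and a dataplane buffer pool, neither of which is shown here, and the function names are invented.

```python
# Conceptual sketch: serverless function chaining over a shared buffer pool,
# where only descriptors (offset, length) travel between functions.
from multiprocessing import shared_memory

pool = shared_memory.SharedMemory(create=True, size=4096)   # shared buffer pool

def ingest(payload: bytes):
    """Gateway: write the payload into the pool once; only a descriptor travels."""
    pool.buf[0:len(payload)] = payload
    return (0, len(payload))

def auth_fn(desc):
    off, length = desc
    # Inspect the request in place; forward the same descriptor, not the bytes.
    return desc if bytes(pool.buf[off:off + length]).startswith(b"GET") else None

def handler_fn(desc):
    off, length = desc
    return b"200 OK: " + bytes(pool.buf[off:off + length])

if __name__ == "__main__":
    desc = ingest(b"GET /index.html")
    desc = auth_fn(desc)                 # chain hop 1: descriptor only
    if desc is not None:
        print(handler_fn(desc))          # chain hop 2: descriptor only
    pool.close()
    pool.unlink()
```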
-
Recent work has demonstrated how programmable switches can effectively detect attack traffic, such as denial-of-service attacks, in the midst of high-volume network traffic. However, these techniques primarily rely on sampling- or sketch-based data structures that can only be used to approximate the characteristics of dominant flows in the network. As a result, such techniques are unable to effectively detect slow attacks such as SYN port scans, SSH brute forcing, or HTTP connection exploits, which stealthily add only a few packets to the network. In this work we explore how the combination of programmable switches, smart network interface cards (sNICs), and hosts can enable fine-grained analysis of every flow in a cloud network, even those with only a small number of packets. We focus on analyzing packets at the start of each flow, as those packets often help indicate whether a flow is benign or suspicious, e.g., by detecting an attack that fails to complete the TCP handshake in order to waste server connection resources. Our approach leverages the high-speed processing of a programmable switch while overcoming its primary limitation, very limited memory capacity, by judiciously sending some state for processing to the sNIC or the host, which typically has more memory but lower bandwidth. Achieving this requires careful design of data structures on the switch, such as a Bloom filter and flow logs, and of communication protocols between the switch, sNIC, and host to coordinate state.
Free, publicly-accessible full text available June 27, 2025
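A toy model can clarify the roles of the two tiers: a small Bloom filter stands in for the switch's scarce memory, and a plain dictionary stands in for the roomier per-flow log kept on the sNIC/host. The hash construction, sizes, and the handshake heuristic below are illustrative assumptions, not the paper's actual parameters or protocol.

```python
# Hedged sketch of split-state, first-packets flow analysis.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m // 8)

    def _positions(self, key: bytes):
        for i in range(self.k):
            h = int.from_bytes(hashlib.sha256(bytes([i]) + key).digest()[:4], "big")
            yield h % self.m

    def add(self, key: bytes):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

switch_seen_syn = BloomFilter()   # "switch": compact record that a flow sent a SYN
host_flow_log = {}                # "sNIC/host": richer per-flow state for early packets

def on_packet(flow_id: str, flags: str):
    key = flow_id.encode()
    if flags == "SYN":
        switch_seen_syn.add(key)
        host_flow_log.setdefault(flow_id, {"syn": True, "acked": False})
    elif flags == "ACK" and key in switch_seen_syn:
        host_flow_log.setdefault(flow_id, {"syn": True})["acked"] = True

def suspicious_flows():
    """Flows that opened a connection but never completed the handshake."""
    return [f for f, s in host_flow_log.items() if s.get("syn") and not s.get("acked")]

if __name__ == "__main__":
    on_packet("10.0.0.1:5555->10.0.0.2:22", "SYN")     # half-open, e.g. a scan
    on_packet("10.0.0.3:6666->10.0.0.2:80", "SYN")
    on_packet("10.0.0.3:6666->10.0.0.2:80", "ACK")     # completed handshake
    print(suspicious_flows())                          # ['10.0.0.1:5555->10.0.0.2:22']
```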
-
Recent work has demonstrated how programmable switches can effectively detect attack traffic, such as denial-of-service attacks, in the midst of high-volume network traffic. However, these techniques primarily rely on sampling- or sketch-based data structures that can only be used to approximate the characteristics of dominant flows in the network. As a result, such techniques are unable to effectively detect slow attacks such as SYN port scans, SSH brute forcing, or HTTP connection exploits, which stealthily add only a few packets to the network. In this work we explore how the combination of programmable switches, smart network interface cards (sNICs), and hosts can enable fine-grained analysis of every flow in a cloud network, even those with only a small number of packets. We focus on analyzing packets at the start of each flow, as those packets often help indicate whether a flow is benign or suspicious, e.g., by detecting an attack that fails to complete the TCP handshake in order to waste server connection resources. Our approach leverages the high-speed processing of a programmable switch while overcoming its primary limitation, very limited memory capacity, by judiciously sending some state for processing to the sNIC or the host, which typically has more memory but lower bandwidth. Achieving this requires careful design of data structures on the switch, such as a Bloom filter and flow logs, and of communication protocols between the switch, sNIC, and host to coordinate state.
Free, publicly-accessible full text available June 27, 2025