This paper studies the “age of information” (AoI) in a multi-source status update system where N active sources each send updates of their time-varying process to a monitor through a server with packet delivery errors. We analyze the average AoI for stationary randomized and round-robin scheduling policies. For both scheduling policies, we further analyze the effect of packet retransmission policies when errors occur, i.e., retransmission without resampling, retransmission with resampling, or no retransmission. Expressions for the average AoI are derived for each case. It is shown that the round-robin scheduling policy, in conjunction with retransmission with resampling when errors occur, achieves the lowest average AoI among the considered cases. For stationary randomized schedules with equiprobable source selection, it is further shown that the average AoI gap to round-robin schedules with the same packet management policy scales as O(N). Finally, for stationary randomized policies, the optimal source selection probabilities that minimize a weighted sum average AoI metric are derived.
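As a rough numerical illustration of the scheduling comparison described above, the sketch below simulates a slotted system with unit service time, i.i.d. delivery errors, and retransmission with resampling, and estimates the average AoI under round-robin and equiprobable randomized scheduling. The slot-based model, parameter values, and function names are assumptions for illustration, not the paper's analytical setup.

```python
import random


def simulate_avg_aoi(n_sources=4, p_err=0.2, policy="round_robin",
                     slots=200_000, seed=0):
    """Estimate average AoI per source in a slotted system.

    Modeling assumptions (not from the paper): unit service time per
    attempt, i.i.d. delivery errors with probability p_err, and
    retransmission with resampling, so every attempt carries a fresh
    sample and a successful delivery resets that source's age to one slot.
    """
    rng = random.Random(seed)
    age = [1] * n_sources          # current AoI at the monitor, per source
    total_age = 0
    rr_index = 0

    for _ in range(slots):
        # choose which source is served in this slot
        if policy == "round_robin":
            src = rr_index
            rr_index = (rr_index + 1) % n_sources
        else:  # stationary randomized with equiprobable source selection
            src = rng.randrange(n_sources)

        delivered = rng.random() >= p_err
        age = [a + 1 for a in age]     # every source ages by one slot
        if delivered:
            age[src] = 1               # fresh sample arrives after one slot of service

        total_age += sum(age)

    return total_age / (slots * n_sources)


if __name__ == "__main__":
    for pol in ("round_robin", "randomized"):
        print(pol, round(simulate_avg_aoi(policy=pol), 2))
```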
This content will become publicly available on May 27, 2026
Design and Modeling of a New File Transfer Architecture to Reduce Undetected Errors Evaluated in the FABRIC Testbed
Ensuring the integrity of petabyte-scale file transfers is essential for data gathered from scientific instruments. As packet sizes increase, so does the likelihood of errors, resulting in a higher probability of undetected errors in a packet. This paper presents a Multi-Level Error Detection (MLED) framework that leverages in-network resources to reduce the undetected error probability (UEP) in file transmission. MLED is based on a configurable recursive architecture that organizes communication in layers at different levels, decoupling network functions such as error detection, routing, addressing, and security. Each layer Lij at level i implements a policy Pij that governs its operation, including the error detection mechanism used, specific to the scope of that layer. MLED can be configured to mimic the error detection mechanisms of existing large-scale file transfer protocols. Analysis of the recursive structure of MLED shows that adding levels of error detection reduces the overall UEP. An adversarial error model is designed to introduce errors into files that evade detection by multiple error detection policies. Experimentation on the FABRIC testbed shows that the traditional approach, with transport- and data-link-layer error detection, results in a corrupt file transfer requiring retransmission of the entire file. Using its recursive structure, an implementation of MLED detects and corrects these adversarial errors at intermediate levels inside the network, avoiding file retransmission under non-zero error rates. MLED therefore achieves a 100% gain in goodput over the traditional approach, reaching a goodput of over 800 Mbps on a single connection with no appreciable increase in delay.
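The layered, policy-driven structure described in the abstract can be pictured with the sketch below: each layer applies its own error-detection policy over blocks within its scope, so corruption is localized and only the affected block needs retransmission rather than the whole file. The class names, the CRC-32 and Internet-checksum policies, and the block sizes are illustrative assumptions, not MLED's actual implementation.

```python
import zlib
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical sketch of recursive, per-layer error detection in the spirit
# of MLED; names and mechanisms are assumptions, not the paper's code.


def crc32(data: bytes) -> int:
    return zlib.crc32(data)


def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum, as used by IP/TCP/UDP checksums."""
    if len(data) % 2:
        data += b"\x00"
    s = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF


@dataclass
class Policy:
    name: str
    digest: Callable[[bytes], int]   # error-detection mechanism for this layer


@dataclass
class Layer:
    level: int
    policy: Policy
    block_size: int                  # scope of this layer's checks

    def protect(self, data: bytes) -> List[Tuple[bytes, int]]:
        """Split data into blocks and tag each with this layer's digest."""
        blocks = [data[i:i + self.block_size]
                  for i in range(0, len(data), self.block_size)]
        return [(b, self.policy.digest(b)) for b in blocks]

    def find_corrupt(self, tagged: List[Tuple[bytes, int]]) -> List[int]:
        """Return indices of blocks whose digests no longer match."""
        return [i for i, (b, d) in enumerate(tagged)
                if self.policy.digest(b) != d]


# Example: a block-level layer catches a single corrupted block, so only
# that block needs retransmission instead of the entire file.
if __name__ == "__main__":
    file_data = bytes(range(256)) * 64
    block_layer = Layer(level=1, policy=Policy("crc32", crc32), block_size=1024)
    tagged = block_layer.protect(file_data)

    # simulate an in-flight error in one block (byte 10 of block 3 flipped)
    blk, dig = tagged[3]
    tagged[3] = (blk[:10] + b"\xff" + blk[11:], dig)

    print("corrupt blocks:", block_layer.find_corrupt(tagged))  # -> [3]
```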
- PAR ID: 10630774
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Proceedings of the ACM on Measurement and Analysis of Computing Systems
- Volume: 9
- Issue: 2
- ISSN: 2476-1249
- Page Range / eLocation ID: 1 to 42
- Subject(s) / Keyword(s): Large-scale Data Transfer; High-speed Wide Area Network; Multi-Level Error Detection; Recursive Architecture; Error Detection; Adversarial Error Model; CRC; Internet checksum
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Inspired by prior work suggesting undetected errors were becoming a problem on the Internet, we set out to create a measurement system to detect errors that the TCP checksum missed. We designed a client-server framework in which the servers sent known files to clients. We then compared the received data with the original file to identify undetected errors introduced by the network. We deployed this measurement framework on various public testbeds. Over the course of 9 months, we transferred a total of 26 petabytes of data. Scaling the measurement framework to capture a large number of errors proved to be a challenge. This paper focuses on the challenges encountered during the deployment of the measurement system. We also present the interim results, which suggest that the error problems seen in prior works may be caused by two distinct processes: (1) errors that slip past TCP and (2) file system failures. The interim results also suggest that the measurement system needs to be adjusted to collect exabytes of measurement data, rather than the petabytes that prior studies predicted.
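A minimal sketch of the client-side verification step described above, assuming a measurement server that streams a known file and a reference digest computed offline; the host/port interface and the use of SHA-256 are assumptions for illustration, not the deployed system's design.

```python
import hashlib
import socket

# Hypothetical client-side check: stream a known file from a measurement
# server and compare its digest against a reference computed offline.


def fetch_and_verify(host: str, port: int, expected_sha256: str) -> bool:
    """Return True if the received stream matches the reference digest."""
    h = hashlib.sha256()
    with socket.create_connection((host, port)) as sock:
        while True:
            chunk = sock.recv(1 << 16)
            if not chunk:
                break
            h.update(chunk)   # hash while streaming; the file need not be kept
    if h.hexdigest() != expected_sha256:
        # A mismatch means the data changed somewhere the TCP checksum did
        # not catch, or was corrupted locally (e.g., by the file system).
        print("undetected error candidate: digest mismatch")
        return False
    return True
```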
-
Underwater backscatter is a promising technology for ultra-low-power underwater networking, but existing systems break down in mobile scenarios. This paper presents EchoRider, the first system to enable reliable underwater backscatter networking under mobility. EchoRider introduces three key components. First, it incorporates a robust and energy-efficient downlink architecture that uses chirp-modulated transmissions at the reader and a sub-Nyquist chirp decoder on backscatter nodes, bringing the resilience of LoRa-style signaling to underwater backscatter while remaining ultra-low-power. Second, it introduces a NACK-based full-duplex retransmission protocol, enabling efficient, reliable packet delivery. Third, it implements a Doppler-resilient uplink decoding pipeline that includes adaptive equalization, polar coding, and dynamic retraining to combat channel variation. We built a full EchoRider prototype and evaluated it across over 1,200 real-world mobile experiments. EchoRider improves bit error rate by over 125x compared to a state-of-the-art baseline and maintains underwater goodput of 0.8 kbps at speeds up to 2.91 knots. In contrast, the baseline fails at speeds as low as 0.17 knots. Finally, we demonstrate EchoRider in end-to-end deployments involving mobile drones and sensor nodes, showing its effectiveness in practical underwater networked applications.
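The NACK-based retransmission idea can be sketched as a receiver that reports only the missing sequence numbers, so the sender retransmits just those packets rather than the whole stream. The class below and its sequence-number framing are hypothetical and not EchoRider's actual protocol or frame format.

```python
# Minimal sketch of a NACK-based retransmission receiver over
# sequence-numbered packets; an illustrative assumption, not EchoRider's design.


class NackReceiver:
    def __init__(self):
        self.received = {}       # seq -> payload
        self.next_expected = 0   # lowest sequence number not yet delivered in order

    def on_packet(self, seq: int, payload: bytes):
        """Store the packet and return the sequence numbers to NACK."""
        self.received[seq] = payload
        missing = [s for s in range(self.next_expected, seq)
                   if s not in self.received]
        # advance past the contiguous prefix now held
        while self.next_expected in self.received:
            self.next_expected += 1
        return missing   # sender retransmits only these packets


if __name__ == "__main__":
    rx = NackReceiver()
    print(rx.on_packet(0, b"a"))   # []        nothing missing
    print(rx.on_packet(2, b"c"))   # [1]       NACK sequence 1
    print(rx.on_packet(1, b"b"))   # []        gap filled
```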
-
A cursory look at the Internet protocol stack shows error checking capability almost at every layer, and yet, a slowly growing set of results show that a surprising fraction of big data transfers over TCP/IP are failing. As we have dug into this problem, we have come to realize that nobody is paying much attention to the causes of transmission errors in the Internet. Rather, they have typically resorted to file-level retransmissions. Given the exponential growth in data sizes, this approach is not sustainable. Furthermore, while there has been considerable progress in understanding error codes and how to choose or create error codes that offer sturdy error protection, the Internet has not made use of this new science. We propose a set of new ideas that look at paths forward to reduce error rates and better protect big data. We also propose a new file transfer protocol that efficiently handles errors and minimizes retransmissions.