Big Data, Transmission Errors, and the Internet
A cursory look at the Internet protocol stack shows error-checking capability at almost every layer, and yet a slowly growing body of results shows that a surprising fraction of big data transfers over TCP/IP are failing. As we have dug into this problem, we have come to realize that nobody is paying much attention to the causes of transmission errors in the Internet; practitioners have typically resorted to file-level retransmissions instead. Given the exponential growth in data sizes, this approach is not sustainable. Furthermore, while there has been considerable progress in understanding error codes and how to choose or construct codes that offer sturdy error protection, the Internet has not made use of this new science. We propose a set of new ideas that chart paths forward to reduce error rates and better protect big data, and we propose a new file transfer protocol that efficiently handles errors and minimizes retransmissions.
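To make the problem concrete, the sketch below (illustrative, not from the paper) implements the 16-bit ones'-complement Internet checksum that TCP uses (RFC 1071) and shows a corruption pattern it provably cannot detect: because ones'-complement addition is commutative, swapping two 16-bit words in transit leaves the checksum unchanged.

```python
# A minimal sketch of the RFC 1071 Internet checksum (illustrative,
# not the paper's code).

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"  # pad to a whole number of 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

original  = b"\x12\x34\x56\x78"
reordered = b"\x56\x78\x12\x34"  # two 16-bit words swapped in transit
assert internet_checksum(original) == internet_checksum(reordered)
```

Word reorderings, offsetting bit flips, and similar patterns are exactly the class of in-flight corruption that can pass TCP's check and surface only at the application layer, which is why file-level retransmission has been the fallback.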
- Award ID(s): 2019163
- PAR ID: 10440982
- Date Published:
- Journal Name: 2023 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks - Supplemental Volume (DSN-S)
- Page Range / eLocation ID: 142 to 145
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Inspired by prior work suggesting undetected errors were becoming a problem on the Internet, we set out to create a measurement system to detect errors that the TCP checksum missed. We designed a client-server framework in which the servers sent known files to clients. We then compared the received data with the original file to identify undetected errors introduced by the network. We deployed this measurement framework on various public testbeds. Over the course of 9 months, we transferred a total of 26 petabytes of data. Scaling the measurement framework to capture a large number of errors proved to be a challenge. This paper focuses on the challenges encountered during the deployment of the measurement system. We also present the interim results, which suggest that the error problems seen in prior works may be caused by two distinct processes: (1) errors that slip past TCP and (2) file system failures. The interim results also suggest that the measurement system needs to be adjusted to collect exabytes of measurement data, rather than the petabytes that prior studies predicted. (See the first sketch following this list.)
-
Entanglement is essential for quantum information processing, but is limited by noise. We address this by developing high-yield entanglement distillation protocols with several advancements. (1) We extend the 2-to-1 recurrence entanglement distillation protocol to higher-rate n-to-(n−1) protocols that can correct any single-qubit errors. These protocols are evaluated through numerical simulations focusing on fidelity and yield. We also outline a method to adapt any classical error-correcting code for entanglement distillation, where the code can correct both bit-flip and phase-flip errors by incorporating Hadamard gates. (2) We propose a constant-depth decoder for stabilizer codes that transforms logical states into physical ones using single-qubit measurements. This decoder is applied to entanglement distillation protocols, reducing circuit depth and enabling protocols derived from high-performance quantum error-correcting codes. We demonstrate this by evaluating the circuit complexity for entanglement distillation protocols based on surface codes and quantum convolutional codes. (3) Our stabilizer entanglement distillation techniques advance quantum computing. We propose a fault-tolerant protocol for constant-depth encoding and decoding of arbitrary states in surface codes, with potential extensions to more general quantum low-density parity-check codes. This protocol is feasible with state-of-the-art reconfigurable atom arrays and surpasses the limits of conventional logarithmic depth encoders. Overall, our study integrates stabilizer formalism, measurement-based quantum computing, and entanglement distillation, advancing both quantum communication and computing. (See the second sketch following this list.)
-
Executing quantum algorithms on error-corrected logical qubits is a critical step for scalable quantum computing, but the requisite numbers of qubits and physical error rates are demanding for current experimental hardware. Recently, the development of error correcting codes tailored to particular physical noise models has helped relax these requirements. In this work, we propose a qubit encoding and gate protocol for 171Yb neutral atom qubits that converts the dominant physical errors into erasures, that is, errors in known locations. The key idea is to encode qubits in a metastable electronic level, such that gate errors predominantly result in transitions to disjoint subspaces whose populations can be continuously monitored via fluorescence. We estimate that 98% of errors can be converted into erasures. We quantify the benefit of this approach via circuit-level simulations of the surface code, finding a threshold increase from 0.937% to 4.15%. We also observe a larger code distance near the threshold, leading to a faster decrease in the logical error rate for the same number of physical qubits, which is important for near-term implementations. Erasure conversion should benefit any error correcting code, and may also be applied to design new gates and encodings in other qubit platforms. (See the third sketch following this list.)
-
We construct a fault-tolerant quantum error-correcting protocol based on a qubit encoded in a large spin qudit using a spin-cat code, analogous to the continuous-variable cat encoding. With this, we can correct the dominant error sources, namely processes that can be expressed as error operators that are linear or quadratic in the components of angular momentum. Such codes tailored to dominant error sources can exhibit superior thresholds and lower resource overheads when compared to those designed for unstructured noise models. A key component is the gate that preserves the rank of spherical tensor operators. Categorizing the dominant errors as phase and amplitude errors, we demonstrate how phase errors, analogous to phase-flip errors for qubits, can be effectively corrected. Furthermore, we propose a measurement-free error-correction scheme to address amplitude errors without relying on syndrome measurements. Through an in-depth analysis of logical gate errors, we establish that the fault-tolerant threshold for error correction in the spin-cat encoding surpasses that of standard qubit-based encodings. We consider a specific implementation based on neutral-atom quantum computing, with qudits encoded in the nuclear spin of 87Sr, and show how to generate the universal gate set, including the rank-preserving gate, using quantum control and the Rydberg blockade. These findings pave the way for encoding a qubit in a large spin with the potential to achieve fault tolerance, high threshold, and reduced resource overhead in quantum information processing. (See the fourth sketch following this list.)
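The first related abstract above describes servers sending known files to clients, with the received bytes compared against the original to catch errors the TCP checksum missed. A minimal sketch of that comparison step, with the host/port, helper names, and the 1 MiB block size all assumed for illustration rather than taken from the authors' system:

```python
# A sketch (assumed design, not the authors' code): fetch a file whose
# contents are known in advance and compare per-block SHA-256 digests
# to flag corruption that slipped past TCP.
import hashlib
import socket

BLOCK = 1 << 20  # 1 MiB blocks; the granularity is an assumption

def expected_digests(path: str) -> list[bytes]:
    """Digest the reference copy of the known file, block by block."""
    digests = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK):
            digests.append(hashlib.sha256(chunk).digest())
    return digests

def check_transfer(host: str, port: int, reference: list[bytes]) -> list[int]:
    """Receive the file over TCP; return indices of mismatched blocks."""
    bad = []
    with socket.create_connection((host, port)) as s:
        for i, want in enumerate(reference):
            buf = bytearray()
            while len(buf) < BLOCK:
                chunk = s.recv(BLOCK - len(buf))
                if not chunk:  # sender closed (e.g. short final block)
                    break
                buf += chunk
            if hashlib.sha256(bytes(buf)).digest() != want:
                bad.append(i)  # candidate undetected error or truncation
    return bad
```

Comparing the in-flight data against the reference, and separately re-reading the file after it is written to disk, is one way to distinguish the two failure processes the abstract identifies (errors that slip past TCP versus file system failures).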
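The second related abstract generalizes the classic 2-to-1 recurrence distillation protocol to n-to-(n−1) protocols. As background only, here is the textbook fidelity map for one 2-to-1 (BBPSSW-style) round on Werner states; this is the baseline being extended, not the paper's construction:

```python
# One textbook 2-to-1 recurrence distillation round on two Werner
# pairs of fidelity F (a standard result, not the paper's protocol).

def recurrence_step(F: float) -> tuple[float, float]:
    e = (1 - F) / 3                     # weight of each error component
    p_success = F**2 + 2 * F * e + 5 * e**2
    F_next = (F**2 + e**2) / p_success  # fidelity of the surviving pair
    return F_next, p_success

F = 0.75
for n in range(1, 5):
    F, p = recurrence_step(F)
    print(f"round {n}: F = {F:.4f}, success probability = {p:.3f}")
```

Each round consumes two pairs to keep at most one, which is exactly the yield limitation that motivates the higher-rate n-to-(n−1) protocols described in the abstract.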
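The third related abstract quotes a surface-code threshold increase from 0.937% to 4.15% under erasure conversion. A rough feel for what that buys comes from the common below-threshold heuristic p_L ≈ (p/p_th)^((d+1)/2) for a distance-d code; exponent and prefactor conventions vary, so treat this as an assumed back-of-the-envelope model, not the paper's fitted simulation:

```python
# Heuristic below-threshold scaling of the logical error rate
# (assumed model for illustration; conventions vary).

def logical_rate(p: float, p_th: float, d: int) -> float:
    return (p / p_th) ** ((d + 1) / 2)

p, d = 0.5e-2, 11  # 0.5% physical error rate, distance-11 code (assumed)
for p_th in (0.937e-2, 4.15e-2):
    print(f"p_th = {p_th:.3%}: p_L ~ {logical_rate(p, p_th, d):.2e}")
```

Under this heuristic the higher threshold improves the logical error rate by several orders of magnitude at the same physical error rate and code distance.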
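The fourth related abstract encodes a qubit in a large spin using a spin-cat code. The numpy sketch below (conventions assumed for illustration, not the paper's construction) builds cat states from the extremal Jz eigenstates of a J = 9/2 spin, the value of the 87Sr nuclear spin mentioned in the abstract, and verifies that an error operator linear in the angular momentum merely exchanges the two cat states:

```python
# Spin-cat states for a J = 9/2 spin (illustrative conventions).
import numpy as np

J = 9 / 2
dim = int(2 * J + 1)
m_values = np.arange(J, -J - 1, -1)  # m = J, J-1, ..., -J
Jz = np.diag(m_values)               # Jz in its own eigenbasis

def ket(m: float) -> np.ndarray:
    """Basis vector for |J, m>."""
    v = np.zeros(dim)
    v[int(J - m)] = 1.0
    return v

# Cat states: superpositions of the two extremal Jz eigenstates,
# analogous to even/odd continuous-variable cat states.
plus  = (ket(J) + ket(-J)) / np.sqrt(2)
minus = (ket(J) - ket(-J)) / np.sqrt(2)

# Jz, an operator linear in the angular momentum, maps each cat state
# to the other (up to the factor J), so it stays inside the code space.
assert np.allclose(Jz @ plus, J * minus)
assert np.allclose(Jz @ minus, J * plus)
```

This is only the state-level picture; the abstract's contributions (rank-preserving gates, measurement-free amplitude-error correction, fault-tolerance thresholds) go well beyond it.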