

Title: Noise-resistant quantum communications using hyperentanglement

Quantum information protocols are being deployed in increasingly practical scenarios, via optical fibers or free space, alongside classical communications channels. However, entanglement, the most critical resource to deploy to the communicating parties, is also the most fragile to noise-induced degradation. Here we show that polarization-frequency hyperentanglement of photons can be effectively employed to enable noise-resistant distribution of polarization entanglement through noisy quantum channels. In particular, we demonstrate that our hyperentanglement-based scheme yields an orders-of-magnitude increase in the signal-to-noise ratio (SNR) for the distribution of polarization-entangled qubit pairs, enabling quantum communications even in the presence of strong noise that would otherwise preclude quantum operations due to noise-induced entanglement sudden death. While recent years have witnessed tremendous interest and progress in long-distance quantum communications, previous attempts to deal with noise have mostly focused on passive noise suppression in quantum channels. Here, via the use of hyperentangled degrees of freedom, we pave the way toward a universally adoptable strategy for entanglement-based quantum communications over strongly noisy quantum channels.
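A note for concreteness (not part of the paper's analysis): the link between coincidence SNR and the survival of polarization entanglement can be illustrated with a minimal toy model in which the distributed pair is described by a Werner state whose visibility is set by the ratio of true-pair to total coincidences. The sketch below, with assumed SNR values, shows entanglement (measured by concurrence) vanishing at low SNR and recovering once noise coincidences are suppressed, which is the regime that frequency-correlated filtering aims to reach.

```python
# Toy model (illustrative assumption, not the authors' analysis): polarization
# entanglement sent through a noisy channel modeled as a Werner state
#   rho = V |Phi+><Phi+| + (1 - V) I/4,
# with visibility V set by the coincidence signal-to-noise ratio. Frequency-
# correlated post-selection (enabled by hyperentanglement) rejects uncorrelated
# noise coincidences and therefore raises V.
import numpy as np

def werner_state(v):
    """Two-qubit Werner state with visibility v."""
    phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |HH> + |VV>
    return v * np.outer(phi_plus, phi_plus) + (1 - v) * np.eye(4) / 4

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def visibility(snr):
    """Visibility assuming accidental coincidences are completely unpolarized."""
    return snr / (snr + 1)

for snr in [0.3, 1, 10, 100]:   # hypothetical SNR values (e.g. before/after frequency filtering)
    print(f"SNR = {snr:>5}: concurrence = {concurrence(werner_state(visibility(snr))):.3f}")
```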

 
NSF-PAR ID: 10304415
Author(s) / Creator(s): ; ; ; ; ; ;
Publisher / Repository: Optical Society of America
Date Published:
Journal Name: Optica
Volume: 8
Issue: 12
ISSN: 2334-2536
Page Range / eLocation ID: Article No. 1524
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Qudit entanglement is an indispensable resource for quantum information processing, since increasing dimensionality provides a pathway to higher capacity and increased noise resilience in quantum communications and cluster-state quantum computation. In continuous-variable time–frequency entanglement, encoding multiple qubits per photon is limited only by the frequency correlation bandwidth and detection timing jitter. Here, we focus on discrete-variable time–frequency entanglement in a biphoton frequency comb (BFC), generated by filtering the signal and idler outputs of a continuous-wave (cw)-pumped type-II spontaneous parametric downconverter (SPDC) with a fiber Fabry–Pérot cavity with 45.32 GHz free spectral range (FSR) and 1.56 GHz full width at half maximum (FWHM). We generate a BFC whose time-binned/frequency-binned Hilbert-space dimensionality is at least 324, based on the assumption of a pure state. The BFC's dimensionality doubles to 648 after combining it with its post-selected polarization entanglement, indicating a potential classical-information capacity of 6.28 bits per photon. The BFC exhibits recurring Hong–Ou–Mandel (HOM) dips over 61 time bins with a maximum visibility of 98.4% without correction for accidental coincidences. In a post-selected measurement, it violates the Clauser–Horne–Shimony–Holt (CHSH) inequality for polarization entanglement by up to 18.5 standard deviations, with an S-parameter of up to 2.771. It shows Franson interference recurrences in 16 time bins with a maximum visibility of 96.1% without correction for accidental coincidences. From the zeroth- to the third-order Franson interference, we infer an entanglement of formation (Eof) of up to 1.89 ± 0.03 ebits, where 2 ebits is the maximal entanglement for a 4 × 4-dimensional biphoton, as a lower bound on the 61-time-bin BFC's high-dimensional entanglement. To further characterize time-binned/frequency-binned BFCs, we obtain Schmidt-mode decompositions of BFCs generated using cavities with 45.32, 15.15, and 5.03 GHz FSRs. These decompositions confirm the time–frequency scaling expected from Fourier-transform duality. Moreover, we present the theory of conjugate Franson interferometry, which is characterized by the state's joint temporal intensity (JTI) and can further help distinguish a pure-state BFC from mixed-state entangled frequency pairs, although its experimental implementation is challenging and not yet available. In summary, our BFC serves as a platform for high-dimensional quantum information processing and high-dimensional quantum key distribution (QKD).
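    For readers converting between the interference visibilities and Bell-test numbers quoted above, the short sketch below applies two textbook relations: fringe visibility V = (C_max − C_min)/(C_max + C_min) from coincidence counts, and the CHSH parameter S = 2√2·V expected when a maximally entangled state is observed with visibility V. The coincidence counts used are hypothetical, chosen only to illustrate the arithmetic.

```python
# Standard relations for reading entanglement-quality numbers (the specific
# coincidence counts below are hypothetical, not data from the paper).
import numpy as np

def fringe_visibility(c_max, c_min):
    """Two-photon fringe visibility from max/min coincidence counts."""
    return (c_max - c_min) / (c_max + c_min)

def chsh_s(visibility):
    """CHSH S expected for a maximally entangled state seen with this visibility."""
    return 2 * np.sqrt(2) * visibility

v = fringe_visibility(c_max=1000, c_min=20)
s = chsh_s(v)
print(f"visibility ~ {v:.3f} -> CHSH S ~ {s:.3f} "
      f"({'violates' if s > 2 else 'within'} the classical bound S <= 2)")
```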

     
  2. Abstract

    Quantum key distribution (QKD) has established itself as a groundbreaking technology, with security features that are provable from fundamental principles. Qubit-based QKD protocols that rely on binary encoding face an inherent constraint on the secret key capacity: at most one secret bit per photon. Qudit-based QKD protocols, on the other hand, have an advantage in scenarios where photons are scarce and noise is present, as they enable the transmission of more than one secret bit per photon. While proof-of-principle entanglement-based qudit QKD systems have been demonstrated over the years, the current limitation lies in the maximum distribution distance, which has remained at 20 km of fiber. Moreover, in these entangled high-dimensional QKD systems, the witnessing and distribution of quantum steering have not previously been demonstrated. Here we present a high-dimensional time-bin QKD protocol based on energy-time entanglement that generates a secure finite-length key capacity of 2.39 bits/coincidence and secure cryptographic finite-length keys at 0.24 Mbit s⁻¹ over a 50 km optical fiber link. Our system is built entirely from readily available commercial off-the-shelf components and is secured by the nonlocal dispersion cancellation technique against collective Gaussian attacks. Furthermore, we set new records for witnessing both energy-time entanglement and quantum steering over different fiber distances. When operating with a quantum channel loss of 39 dB, our system retains its inherent large-alphabet character, enabling a secure key rate of 0.30 kbit s⁻¹ and a secure key capacity of 1.10 bits/coincidence, accounting for finite-key effects. Our experimental results closely match the theoretical upper bound on secure cryptographic keys in high-dimensional time-bin QKD protocols (Mower et al 2013 Phys. Rev. A 87 062322; Zhang et al 2014 Phys. Rev. Lett. 112 120506), and outperform recent state-of-the-art qubit-based QKD protocols in terms of secure key throughput using commercial single-photon detectors (Wengerowsky et al 2019 Proc. Natl Acad. Sci. 116 6684; Wengerowsky et al 2020 npj Quantum Inf. 6 5; Zhang et al 2014 Phys. Rev. Lett. 112 120506; Zhang et al 2019 Nat. Photon. 13 839; Liu et al 2019 Phys. Rev. Lett. 122 160501; Zhang et al 2020 Phys. Rev. Lett. 125 010502; Wei et al 2020 Phys. Rev. X 10 031030). The simple and robust entanglement-based high-dimensional time-bin protocol presented here offers a path toward practical long-distance quantum steering and QKD with multiple secure bits per coincidence, and higher secure key rates than mature qubit-based QKD protocols.
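    As a quick consistency check on how the quoted figures relate, the sketch below back-calculates the coincidence rate implied by throughput ≈ (secret bits per coincidence) × (coincidence rate). This simple product ignores finite-key and sifting corrections, and the implied rates are inferences for illustration, not values quoted in the paper.

```python
# Back-of-the-envelope arithmetic relating the abstract's key capacities and
# key rates; the implied coincidence rates are inferred here, not quoted.
configs = [
    ("50 km fiber link", 2.39, 0.24e6),    # (label, secret bits/coincidence, secret bits/s)
    ("39 dB channel loss", 1.10, 0.30e3),
]
for label, bits_per_coincidence, key_rate in configs:
    implied_rate = key_rate / bits_per_coincidence   # coincidences per second
    print(f"{label}: {bits_per_coincidence} bits/coincidence x "
          f"{implied_rate:,.0f} coincidences/s ~ {key_rate / 1e3:.2f} kbit/s")
```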

     
  3. Understanding the computational power of noisy intermediate-scale quantum (NISQ) devices is of both fundamental and practical importance to quantum information science. Here, we address the question of whether error-uncorrected noisy quantum computers can provide a computational advantage over classical computers. Specifically, we study noisy random circuit sampling in one dimension (1D noisy RCS) as a simple model for exploring the effects of noise on the computational power of a noisy quantum device. In particular, we simulate the real-time dynamics of 1D noisy random quantum circuits via matrix product operators (MPOs) and characterize the computational power of the 1D noisy quantum system using a metric we call the MPO entanglement entropy, chosen because it determines the cost of classical MPO simulation. We numerically demonstrate that, for the two-qubit gate error rates we considered, there exists a characteristic system size above which adding more qubits does not bring about exponential growth in the cost of classical MPO simulation of 1D noisy systems. Specifically, we show that above the characteristic system size there is an optimal circuit depth, independent of the system size, at which the MPO entanglement entropy is maximized. Most importantly, the maximum achievable MPO entanglement entropy is bounded by a constant that depends only on the gate error rate, not on the system size. We also provide a heuristic analysis to obtain the scaling of the maximum achievable MPO entanglement entropy as a function of the gate error rate. The obtained scaling suggests that although the cost of MPO simulation does not increase exponentially in the system size above a certain characteristic system size, it does increase exponentially as the gate error rate decreases, possibly making classical simulation practically infeasible even with state-of-the-art supercomputers.
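    The MPO entanglement entropy used above is the operator-space analogue of the familiar state entanglement entropy. A minimal dense-matrix illustration (an assumption for exposition; the paper evolves noisy circuits directly in MPO form rather than densely) is sketched below: vectorize an operator on a short chain, Schmidt-decompose it across a cut, and take the entropy of the normalized squared singular values.

```python
# Operator-space ("MPO") entanglement entropy across a cut of a 1D chain,
# computed densely for a small example. Illustrative only: a random 4-qubit
# density matrix stands in for the state evolved by a noisy random circuit.
import numpy as np

def operator_entanglement_entropy(op, n_sites, cut, local_dim=2):
    """Entropy of the operator Schmidt spectrum of `op` across bond `cut`."""
    d = local_dim
    # Split the row/column indices site by site: axes (r1..rn, c1..cn).
    t = op.reshape([d] * (2 * n_sites))
    # Interleave to (r1, c1, r2, c2, ...) so each site's d*d operator index is
    # contiguous, then group the sites left and right of the cut.
    perm = [i for p in range(n_sites) for i in (p, n_sites + p)]
    t = t.transpose(perm).reshape(d ** (2 * cut), d ** (2 * (n_sites - cut)))
    s = np.linalg.svd(t, compute_uv=False)
    p = s**2 / np.sum(s**2)
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log2(p)))

# Example: random 4-qubit density matrix, cut at the middle bond.
rng = np.random.default_rng(0)
n = 4
x = rng.normal(size=(2**n, 2**n)) + 1j * rng.normal(size=(2**n, 2**n))
rho = x @ x.conj().T
rho /= np.trace(rho)
print("MPO entanglement entropy at the middle cut:",
      round(operator_entanglement_entropy(rho, n_sites=n, cut=n // 2), 3))
```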
  4. Time-frequency (TF) filtering of analog signals has played a crucial role in the development of radio-frequency communications and is now being recognized as an essential capability for both classical and quantum communications in the optical frequency domain. How best can one design optical TF filters to pass a targeted temporal mode (TM) while rejecting background (noise) photons in the TF detection window? The solution for 'coherent' TF filtering is known (the quantum pulse gate), whereas the conventional, more common method is implemented by a sequence of incoherent spectral filtering and temporal gating operations. To compare these two methods, we derive a general formalism for two-stage incoherent time-frequency filtering, finding expressions for the signal pulse transmission efficiency and for the ability to discriminate TMs, which allows unwanted background light to be blocked. We derive the tradeoff between efficiency and TM discrimination ability, and find a remarkably concise relation between these two quantities and the time-bandwidth product of the combined filters. We apply the formalism to two examples, rectangular and Gaussian filters, both of which have known orthogonal-function decompositions. The formalism can be applied to any state of light occupying the input temporal mode, e.g., 'classical' coherent-state signals or pulsed single-photon states of light. In contrast to the radio-frequency domain, where coherent detection is standard and coherent matched filtering can be used to reject noise, in the optical domain direct detection is optimal in a number of scenarios where the signal flux is extremely small. Our analysis shows how the insertion loss and signal-to-noise ratio (SNR) change when incoherent optical filters are used to reject background noise, followed by direct detection, e.g., photon counting. We point out implications for classical and quantum optical communications. As an example, we study quantum key distribution, wherein strong rejection of background noise is necessary to maintain a high quality of entanglement, while high signal transmission is needed to ensure a useful key generation rate.
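    The two-stage incoherent filter discussed above can also be illustrated numerically: apply a Gaussian spectral filter followed by a Gaussian temporal gate to a Gaussian target temporal mode and record the fraction of the signal energy that survives. The filter shapes, widths, and grid below are illustrative choices, not the paper's parameters, and the sketch only tracks the efficiency side of the efficiency-versus-discrimination tradeoff.

```python
# Sketch of two-stage incoherent time-frequency filtering: Gaussian spectral
# filter, then Gaussian temporal gate, applied to a Gaussian temporal mode.
# All widths and grids are illustrative (arbitrary units).
import numpy as np

def gaussian(x, width):
    return np.exp(-x**2 / (2 * width**2))

t = np.linspace(-50, 50, 4096)                  # time grid
dt = t[1] - t[0]
f = np.fft.fftfreq(t.size, d=dt)                # conjugate frequency grid

signal_tm = gaussian(t, 1.0)                    # target temporal mode, duration ~1
signal_tm /= np.sqrt(np.sum(np.abs(signal_tm)**2) * dt)   # normalize energy to 1

def two_stage_filter(field_t, spectral_width, gate_width):
    """Spectral (Gaussian) filter followed by a temporal (Gaussian) gate."""
    spectrum = np.fft.fft(field_t) * gaussian(f, spectral_width)
    return np.fft.ifft(spectrum) * gaussian(t, gate_width)

# Gate width matched to the signal duration; sweep the spectral filter width
# (the filter pair's time-bandwidth product grows with it).
for bw in [0.05, 0.1, 0.2, 0.5]:
    out = two_stage_filter(signal_tm, spectral_width=bw, gate_width=1.0)
    efficiency = np.sum(np.abs(out)**2) * dt    # transmitted fraction of signal energy
    print(f"spectral width {bw}: signal transmission efficiency ~ {efficiency:.2f}")
```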

     
  5. Abstract

    We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $p_{\text{noisy}}$ and the corresponding noiseless output distribution $p_{\text{ideal}}$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $F$ that measures this correlation behaves as $F = \exp(-2s\epsilon \pm O(s\epsilon^2))$, where $\epsilon$ is the probability of error per circuit location and $s$ is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $p_{\text{noisy}}$ and the uniform distribution $p_{\text{unif}}$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $p_{\text{noisy}} \approx F p_{\text{ideal}} + (1-F) p_{\text{unif}}$. In other words, although at least one local error occurs with probability $1-F$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $O(F\epsilon\sqrt{s})$. Thus, the "white-noise approximation" is meaningful when $\epsilon\sqrt{s} \ll 1$, a quadratically weaker condition than the $\epsilon s \ll 1$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $s \ge \Omega(n\log(n))$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $\epsilon^{-1} \ge \tilde{\Omega}(n)$, which is needed to ensure errors are scrambled faster than $F$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
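    The white-noise picture above lends itself to a quick numerical check. The sketch below assumes a Porter–Thomas-like ideal output distribution and illustrative values of $s$ and $\epsilon$ (both assumptions, not the paper's parameters), mixes it with the uniform distribution using $F = \exp(-2s\epsilon)$, and confirms that the sampled linear cross-entropy benchmark returns approximately $F$.

```python
# Numerical check of the white-noise approximation p_noisy ~ F*p_ideal + (1-F)*p_unif:
# sample from the mixture and verify that the linear cross-entropy benchmark ~ F.
# The Porter-Thomas ideal distribution and the values of s, eps are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 12                                # qubits
N = 2**n
s, eps = 300, 1e-3                    # two-qubit gate count and per-gate error rate
F = np.exp(-2 * s * eps)              # fidelity predicted by F = exp(-2*s*eps)

p_ideal = rng.exponential(1.0, N)     # Porter-Thomas-like output probabilities
p_ideal /= p_ideal.sum()
p_unif = np.full(N, 1.0 / N)
p_noisy = F * p_ideal + (1 - F) * p_unif

samples = rng.choice(N, size=200_000, p=p_noisy)
xeb = N * p_ideal[samples].mean() - 1          # linear cross-entropy benchmark estimate
print(f"target F = {F:.3f}, measured linear XEB = {xeb:.3f}")
```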

     