

Title: Distributed quantum sensing enhanced by continuous-variable error correction
Abstract

A distributed sensing protocol uses a network of local sensing nodes to estimate a global feature of the network, such as a weighted average of locally detectable parameters. In the noiseless case, continuous-variable (CV) multipartite entanglement shared by the nodes can improve the precision of parameter estimation relative to the precision attainable by a network without shared entanglement; for an entangled protocol, the root mean square estimation error scales like 1/M with the number M of sensing nodes, the so-called Heisenberg scaling, while for protocols without entanglement, the error scales like 1/√M. However, in the presence of loss and other noise sources, although multipartite entanglement still has some advantages for sensing displacements and phases, the scaling of the precision with M is less favorable. In this paper, we show that using CV error correction codes can enhance the robustness of sensing protocols against imperfections and reinstate Heisenberg scaling up to moderate values of M. Furthermore, while previous distributed sensing protocols could measure only a single quadrature, we construct a protocol in which both quadratures can be sensed simultaneously. Our work demonstrates the value of CV error correction codes in realistic sensing scenarios.
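To make the scaling comparison concrete, here is a minimal back-of-the-envelope reading of the abstract's claim (our illustration, not a calculation from the paper): if each of the M nodes contributes an independent shot-noise-limited measurement, the error on the weighted average falls only as

$$\delta_{\rm sep} \sim \frac{1}{\sqrt{M}},$$

whereas the entangled CV protocol achieves Heisenberg scaling

$$\delta_{\rm ent} \sim \frac{1}{M}.$$

For a network of M = 100 nodes this is a tenfold (√M) reduction in the root mean square error; this is the advantage that loss erodes and that the CV error correction codes are meant to restore.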

 
Award ID(s):
1936118
NSF-PAR ID:
10360334
Author(s) / Creator(s):
; ;
Publisher / Repository:
IOP Publishing
Date Published:
Journal Name:
New Journal of Physics
Volume:
22
Issue:
2
ISSN:
1367-2630
Page Range / eLocation ID:
Article No. 022001
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Squeezed light has long been used to enhance the precision of a single optomechanical sensor. An emerging set of proposals seeks to use arrays of optomechanical sensors to detect weak distributed forces, for applications ranging from gravity-based subterranean imaging to dark matter searches; however, a detailed investigation into the quantum enhancement of this approach remains outstanding. Here, we propose an array of entanglement-enhanced optomechanical sensors to improve the broadband sensitivity of distributed force sensing. By coherently operating the optomechanical sensor array and distributing squeezing to entangle the optical fields, the array of sensors has a scaling advantage over independent sensors (i.e., $$\sqrt{M}\to M$$, where M is the number of sensors) due to coherence, as well as joint noise suppression due to multi-partite entanglement. As an illustration, we consider entanglement enhancement of an optomechanical accelerometer array to search for dark matter, and elucidate the challenge of realizing a quantum advantage in this context.
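    As a rough numerical illustration of the baseline half of that $$\sqrt{M}\to M$$ statement (a sketch under our own toy-model assumptions, not the optomechanical model of the paper), averaging M independent, identically noisy sensor readings reduces the RMS error only as 1/√M; the entangled array is claimed to reach the 1/M line instead.

    ```python
    import numpy as np

    # Toy check: RMS error of the average of M independent unit-variance sensor
    # readings scales as 1/sqrt(M). The 1/M "Heisenberg-like" line is printed
    # only as a formula, since simulating the entangled array is beyond this sketch.
    rng = np.random.default_rng(42)
    trials = 20_000
    true_signal = 0.0

    for M in (1, 4, 16, 64, 256):
        readings = true_signal + rng.normal(size=(trials, M))  # unit-variance noise
        estimate = readings.mean(axis=1)                       # average over sensors
        rms_independent = np.sqrt(np.mean((estimate - true_signal) ** 2))
        print(f"M={M:4d}  independent RMS ≈ {rms_independent:.3f} "
              f"(1/sqrt(M)={1/np.sqrt(M):.3f}, 1/M={1/M:.3f})")
    ```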

     
  2. Abstract

    In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:

    Certifying that a list of n integers has no 3-SUM solution can be done in Merlin–Arthur time $$\tilde{O}(n)$$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $$\tilde{O}(n^{1.5})$$ time (that is, there is a proof system with proofs of length $$\tilde{O}(n^{1.5})$$ and a deterministic verifier running in $$\tilde{O}(n^{1.5})$$ time).

    Counting the number of k-cliques with total edge weight equal to zero in an n-node graph can be done in Merlin–Arthur time $${\tilde{O}}(n^{\lceil k/2\rceil})$$ (where $$k\ge 3$$). For odd k, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an m-edge graph can be done in Merlin–Arthur time $${\tilde{O}}(m)$$. Previous Merlin–Arthur protocols by Williams [CCC'16] and Björklund and Kaski [PODC'16] could only count k-cliques in unweighted graphs, and had worse running times for small k.

    Computing the All-Pairs Shortest Distances matrix for an n-node graph can be done in Merlin–Arthur time $$\tilde{O}(n^2)$$. Note this is optimal, as the matrix can have $$\Omega(n^2)$$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $$\tilde{O}(n^{2.94})$$ nondeterministic time algorithm.

    Certifying that an n-variable k-CNF is unsatisfiable can be done in Merlin–Arthur time $$2^{n/2 - n/O(k)}$$. We also observe an algebrization barrier for the previous $$2^{n/2}\cdot \textrm{poly}(n)$$-time Merlin–Arthur protocol of R. Williams [CCC'16] for #SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for k-UNSAT running in $$2^{n/2}/n^{\omega(1)}$$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.

    Certifying a Quantified Boolean Formula is true can be done in Merlin–Arthur time $$2^{4n/5}\cdot \textrm{poly}(n)$$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin's proof depends on some of Arthur's coins) running in $$2^{2n/3}\cdot \textrm{poly}(n)$$ time.

    Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to n integers can be done in Merlin–Arthur time $$2^{n/3}\cdot \textrm{poly}(n)$$, improving on the previous best protocol by Nederlof [IPL 2017] which took $$2^{0.49991n}\cdot \textrm{poly}(n)$$ time.
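    The completeness-1, randomized-soundness pattern defined at the start of this abstract can be illustrated with a much simpler classical example than any of the protocols above (this is Freivalds' matrix-product verification, included only as a hedged illustration of the Merlin–Arthur style, not a protocol from the paper): Merlin sends a claimed product matrix as the proof, and Arthur verifies it in quadratic randomized time rather than recomputing the product.

    ```python
    import numpy as np

    # Freivalds' verification: Merlin claims C = A @ B. Arthur checks the claim
    # in O(n^2) randomized time per round instead of recomputing the product.
    def arthur_verifies(A, B, C, rounds=30, rng=None):
        """Accept a correct proof with probability 1; reject an incorrect one
        with probability at least 1 - 2**(-rounds)."""
        rng = np.random.default_rng() if rng is None else rng
        n = A.shape[0]
        for _ in range(rounds):
            x = rng.integers(0, 2, size=n)          # random 0/1 challenge vector
            if not np.array_equal(A @ (B @ x), C @ x):
                return False                         # proof is certainly invalid
        return True                                  # proof accepted

    n = 200
    rng = np.random.default_rng(0)
    A = rng.integers(-5, 6, size=(n, n))
    B = rng.integers(-5, 6, size=(n, n))
    honest_proof = A @ B                   # Merlin is honest
    cheating_proof = honest_proof.copy()
    cheating_proof[0, 0] += 1              # Merlin lies in one entry

    print(arthur_verifies(A, B, honest_proof, rng=rng))    # True  (always)
    print(arthur_verifies(A, B, cheating_proof, rng=rng))  # False (with high probability)
    ```

    In the abstract's accounting, the running time of this toy system is the proof length (the n² entries of C) plus Arthur's verification time: an honest proof is always accepted, while any wrong proof is rejected with probability arbitrarily close to 1 after enough rounds.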

     
  3. Abstract

    We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $$p_{\text{noisy}}$$ and the corresponding noiseless output distribution $$p_{\text{ideal}}$$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark F that measures this correlation behaves as $$F=\exp(-2s\epsilon \pm O(s\epsilon^2))$$, where $$\epsilon$$ is the probability of error per circuit location and s is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $$p_{\text{noisy}}$$ and the uniform distribution $$p_{\text{unif}}$$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $$p_{\text{noisy}}\approx F\,p_{\text{ideal}}+(1-F)\,p_{\text{unif}}$$. In other words, although at least one local error occurs with probability $$1-F$$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $$O(F\epsilon\sqrt{s})$$. Thus, the "white-noise approximation" is meaningful when $$\epsilon\sqrt{s} \ll 1$$, a quadratically weaker condition than the $$\epsilon s\ll 1$$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $$s \ge \Omega(n\log(n))$$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $$\epsilon^{-1} \ge {\tilde{\Omega}}(n)$$, which is needed to ensure errors are scrambled faster than F decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
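    As a quick numerical sanity check of what the white-noise approximation buys (our own sketch, using the common convention $$F_{\rm XEB} = 2^n\,\mathbb{E}_{x\sim p_{\rm noisy}}[p_{\rm ideal}(x)] - 1$$ and a Porter–Thomas-like ideal distribution, for which $$2^n\sum_x p_{\rm ideal}(x)^2 \approx 2$$; details may differ from the paper's definitions), samples drawn from $$p_{\rm noisy}\approx F\,p_{\rm ideal}+(1-F)\,p_{\rm unif}$$ recover F even when F is small.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 14                      # qubits (illustrative)
    D = 2 ** n
    F = 0.05                    # target fidelity in the white-noise model

    # Porter-Thomas-like ideal distribution typical of random circuits
    p_ideal = rng.exponential(size=D)
    p_ideal /= p_ideal.sum()

    # White-noise model from the abstract: p_noisy = F * p_ideal + (1 - F) * p_unif
    p_noisy = F * p_ideal + (1 - F) / D

    # Linear cross-entropy benchmark (one common convention):
    # F_XEB = D * E_{x ~ p_noisy}[p_ideal(x)] - 1
    samples = rng.choice(D, size=200_000, p=p_noisy)
    f_xeb = D * p_ideal[samples].mean() - 1
    print(f"recovered F_XEB ≈ {f_xeb:.3f} (target F = {F})")  # close to 0.05
    ```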

     
  4. In a conventional atomic interferometer employing N atoms, the phase sensitivity is at the standard quantum limit: 1/√N. Under usual spin squeezing, the sensitivity is increased by lowering the quantum noise. It is also possible to increase the sensitivity by leaving the quantum noise unchanged while producing phase amplification. Here we show how to increase the sensitivity, to the Heisenberg limit of 1/N, while increasing the quantum noise by √N and amplifying the phase by a factor of N. Because of the enhancement of the quantum noise and the large phase magnification, the effect of excess noise is highly suppressed. The protocol uses a Schrödinger cat state representing a maximally entangled superposition of two collective states of N atoms. The phase magnification occurs when we use either atomic state detection or collective state detection; however, the robustness against excess noise occurs only when atomic state detection is employed. We show that for one version of the protocol, the signal amplitude is N when N is even, and is vanishingly small when N is odd, for both types of detection. We also show how the protocol can be modified to reverse the nature of the signal for odd versus even values of N. Thus, for a situation where the probability of N being even or odd is equal, the net sensitivity is within a factor of √2 of the Heisenberg limit. Finally, we discuss potential experimental constraints for implementing this scheme via one-axis-twist squeezing employing the cavity feedback scheme, and show that the effects of cavity decay and spontaneous emission are highly suppressed because of the increased quantum noise and the large phase magnification inherent to the protocol. As a result, we find that the maximum improvement in sensitivity can be close to the ideal limit for as many as 10 million atoms.
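    A minimal error-propagation bookkeeping of the scalings quoted above (our reading of the abstract, not the paper's derivation) shows why amplifying the phase by N while letting the noise grow by √N still lands on the Heisenberg limit:

    $$\Delta\phi \;=\; \frac{\Delta S}{|\partial S/\partial\phi|} \;\longrightarrow\; \frac{\sqrt{N}\cdot\sqrt{N}}{N\cdot N} \;=\; \frac{1}{N},$$

    compared with the conventional case $$\Delta\phi = \sqrt{N}/N = 1/\sqrt{N}$$: the factor-N phase magnification outpaces the factor-√N growth of the quantum noise, and the same mismatch is what suppresses the relative impact of excess (technical) noise.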

     
  5. Abstract

    We describe the results of a new reverberation mapping program focused on the nearby Seyfert galaxy NGC 3227. Photometric and spectroscopic monitoring was carried out from 2022 December to 2023 June with the Las Cumbres Observatory network of telescopes. We detected time delays in several optical broad emission lines, with Hβ having the longest delay at τ_cent = 4.0 (+0.9/−0.9) days and He II having the shortest delay with τ_cent = 0.9 (+1.1/−0.8) days. We also detect velocity-resolved behavior of the Hβ emission line, with different line-of-sight velocities corresponding to different observed time delays. Combining the integrated Hβ time delay with the width of the variable component of the emission line and a standard scale factor suggests a black hole mass of M_BH = 1.1 (+0.2/−0.3) × 10^7 M_⊙. Modeling of the full velocity-resolved response of the Hβ emission line with the phenomenological code CARAMEL finds a similar mass of M_BH = 1.2 (+1.5/−0.7) × 10^7 M_⊙ and suggests that the Hβ-emitting broad-line region (BLR) may be represented by a biconical or flared disk structure that we are viewing at an inclination angle of θ_i ≈ 33° and with gas motions that are dominated by rotation. The new photoionization-based BLR modeling tool BELMAC finds general agreement with the observations when assuming the best-fit CARAMEL results; however, BELMAC prefers a thick-disk geometry and kinematics that are equally composed of rotation and inflow. Both codes infer a radially extended and flattened BLR that is not outflowing.
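    For context, the "time delay + line width + scale factor" combination referred to above is the standard reverberation-mapping virial estimate (stated here generically; the specific scale factor and line-width measure adopted for NGC 3227 are not restated in this abstract):

    $$M_{\rm BH} \;=\; f\,\frac{c\,\tau\,(\Delta V)^{2}}{G},$$

    where τ is the measured Hβ lag, ΔV is the width of the variable (rms) line profile, and f is the dimensionless scale factor that absorbs the unknown geometry and kinematics of the broad-line region.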

     