

Title: Entanglement-enhanced optomechanical sensor array with application to dark matter searches
Abstract

Squeezed light has long been used to enhance the precision of a single optomechanical sensor. An emerging set of proposals seeks to use arrays of optomechanical sensors to detect weak distributed forces, for applications ranging from gravity-based subterranean imaging to dark matter searches; however, a detailed investigation into the quantum enhancement of this approach remains outstanding. Here, we propose an array of entanglement-enhanced optomechanical sensors to improve the broadband sensitivity of distributed force sensing. By coherently operating the optomechanical sensor array and distributing squeezing to entangle the optical fields, the array of sensors has a scaling advantage over independent sensors (i.e., $$\sqrt{M}\to M$$, where $$M$$ is the number of sensors) due to coherence, as well as joint noise suppression due to multi-partite entanglement. As an illustration, we consider entanglement enhancement of an optomechanical accelerometer array to search for dark matter, and elucidate the challenge of realizing a quantum advantage in this context.
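To make the scaling claim concrete, here is a minimal Monte Carlo sketch (ours, not the paper's model; the force, noise level, and array sizes are hypothetical) that treats each sensor as a scalar Gaussian readout channel: averaging $$M$$ independent sensors improves the signal-to-noise ratio like $$\sqrt{M}$$, while a coherently combined array whose joint readout noise is held at the single-sensor level by distributed squeezing improves like $$M$$.

```python
# Toy Monte Carlo contrasting the sqrt(M) -> M sensitivity scaling of a
# coherently operated, entanglement-enhanced sensor array against
# independent sensors. Illustrative only: sensors are modeled as scalar
# Gaussian readout channels; f, sigma, and M are assumed values, not
# quantities from the paper.
import numpy as np

rng = np.random.default_rng(0)
f = 1e-3        # weak distributed force (same at every sensor), arb. units
sigma = 1.0     # single-sensor readout noise (shot-noise level)
trials = 200_000

for M in (4, 16, 64):
    # Independent sensors: average M separate noisy readouts.
    # Noise averages down as sigma/sqrt(M) -> SNR grows like sqrt(M).
    indep = rng.normal(f, sigma, size=(trials, M)).mean(axis=1)

    # Coherent array: signal amplitudes add (M*f) before a single joint
    # detection; distributed squeezing is assumed to hold the joint
    # readout noise at the single-sensor level -> SNR grows like M.
    coherent = rng.normal(M * f, sigma, size=trials) / M

    snr_i = f / indep.std()
    snr_c = f / coherent.std()
    print(f"M={M:3d}  SNR(indep)={snr_i:6.3f}  SNR(coherent)={snr_c:6.3f}  "
          f"ratio≈{snr_c / snr_i:4.2f} (expect sqrt(M)={np.sqrt(M):.2f})")
```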

 
Award ID(s):
2330310 2209473 2142882 2240641 1945832
NSF-PAR ID:
10452888
Author(s) / Creator(s):
Publisher / Repository:
Nature Publishing Group
Date Published:
Journal Name:
Communications Physics
Volume:
6
Issue:
1
ISSN:
2399-3650
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    This paper presents a search for dark matter, $$\chi$$, using events with a single top quark and an energetic $$W$$ boson. The analysis is based on proton–proton collision data collected with the ATLAS experiment at $$\sqrt{s}=13$$ TeV during LHC Run 2 (2015–2018), corresponding to an integrated luminosity of 139 fb$$^{-1}$$. The search considers final states with zero or one charged lepton (electron or muon), at least one $$b$$-jet and large missing transverse momentum. In addition, a result from a previous search considering two-charged-lepton final states is included in the interpretation of the results. The data are found to be in good agreement with the Standard Model predictions, and the results are interpreted in terms of 95% confidence-level exclusion limits in the context of a class of dark matter models involving an extended two-Higgs-doublet sector together with a pseudoscalar mediator particle. The search is particularly sensitive to on-shell production of the charged Higgs boson state, $$H^{\pm}$$, arising from the two-Higgs-doublet mixing, and its semi-invisible decays via the mediator particle, $$a$$: $$H^{\pm} \rightarrow W^{\pm} a (\rightarrow \chi\chi)$$. Signal models with $$H^{\pm}$$ masses up to 1.5 TeV and $$a$$ masses up to 350 GeV are excluded assuming a $$\tan\beta$$ value of 1. For $$a$$ masses of 150 (250) GeV, $$\tan\beta$$ values up to 2 are excluded for $$H^{\pm}$$ masses between 200 (400) GeV and 1.5 TeV. Signals with $$\tan\beta$$ values between 20 and 30 are excluded for $$H^{\pm}$$ masses between 500 and 800 GeV.

     
  2. Abstract

    We prove that $${{\,\textrm{poly}\,}}(t)\cdot n^{1/D}$$-depth local random quantum circuits with two-qudit nearest-neighbor gates on a $$D$$-dimensional lattice with $$n$$ qudits are approximate $$t$$-designs in various measures. These include the "monomial" measure, meaning that the monomials of a random circuit from this family have expectation close to the value that would result from the Haar measure. Previously, the best bound was $${{\,\textrm{poly}\,}}(t)\cdot n$$ due to Brandão–Harrow–Horodecki (Commun Math Phys 346(2):397–434, 2016) for $$D=1$$. We also improve the "scrambling" and "decoupling" bounds for spatially local random circuits due to Brown and Fawzi (Scrambling speed of random quantum circuits, 2012). One consequence of our result is that, assuming the polynomial hierarchy ($$\textsf{PH}$$) is infinite and that certain counting problems are $$\#\textsf{P}$$-hard "on average", sampling within total variation distance from these circuits is hard for classical computers. Previously, exact sampling from the outputs of even constant-depth quantum circuits was known to be hard for classical computers under these assumptions. However, the standard strategy for extending this hardness result to approximate sampling requires the quantum circuits to have a property called "anti-concentration", meaning roughly that the output has near-maximal entropy. Unitary 2-designs have the desired anti-concentration property. Our result improves the required depth for this level of anti-concentration from linear depth to a sub-linear value, depending on the geometry of the interactions. This is relevant to a recent experiment by the Google Quantum AI group to perform such a sampling task with 53 qubits on a two-dimensional lattice (Arute et al. in Nature 574(7779):505–510, 2019; Boixo et al. in Nat Phys 14(6):595–600, 2018) (and related experiments by USTC), and confirms their conjecture that $$O(\sqrt{n})$$ depth suffices for anti-concentration. The proof is based on a previous construction of $$t$$-designs by Brandão et al. (2016), an analysis of how approximate designs behave under composition, and an extension of the quasi-orthogonality of permutation operators developed by Brandão et al. (2016). Different versions of the approximate design condition correspond to different norms, and part of our contribution is to introduce the norm corresponding to anti-concentration and to establish equivalence between these various norms for low-depth circuits. For random circuits with long-range gates, we use different methods to show that anti-concentration happens at circuit size $$O(n\ln^2 n)$$, corresponding to depth $$O(\ln^3 n)$$. We also show a lower bound of $$\Omega(n\ln n)$$ for the size of such circuits in this case. We also prove that anti-concentration is possible in depth $$O(\ln n \ln\ln n)$$ (size $$O(n \ln n \ln\ln n)$$) using a different model.
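    As a small numerical illustration of the anti-concentration property discussed above (our toy simulation, not the paper's proof technique; the qubit count, depths, and sample counts are arbitrary), one can track the collision probability $$2^n\sum_x p(x)^2$$ of a 1D brickwork circuit with Haar-random nearest-neighbor gates and watch it approach the Haar value of about 2 as depth grows:

    ```python
    # Statevector sketch of anti-concentration in 1D local random circuits:
    # Z = 2^n * sum_x p(x)^2 decays toward the Haar value ~2 with depth.
    # Small-n illustration with numpy only.
    import numpy as np

    rng = np.random.default_rng(1)

    def haar_2q():
        """Haar-random 4x4 unitary via QR of a complex Gaussian matrix."""
        z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
        q, r = np.linalg.qr(z)
        return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

    def apply_2q(psi, u, i, n):
        """Apply a two-qubit gate u on neighboring qubits (i, i+1)."""
        psi = psi.reshape((2**i, 4, 2**(n - i - 2)))
        return np.einsum('ab,iaj->ibj', u.T, psi).reshape(-1)

    n = 8
    for depth in (1, 2, 4, 8, 16):
        zs = []
        for _ in range(20):                      # average over circuit instances
            psi = np.zeros(2**n, complex); psi[0] = 1.0
            for layer in range(depth):           # brickwork: even, then odd bonds
                for i in range(layer % 2, n - 1, 2):
                    psi = apply_2q(psi, haar_2q(), i, n)
            p = np.abs(psi)**2
            zs.append(2**n * np.sum(p**2))
        print(f"depth={depth:2d}  Z ≈ {np.mean(zs):5.2f} "
              f"(Haar value ≈ {2 * 2**n / (2**n + 1):.2f})")
    ```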

     
  5. We present a performance analysis of compact monolithic optomechanical inertial sensors that describes their key fundamental limits and overall acceleration noise floor. Performance simulations for low-frequency gravity-sensitive inertial sensors show attainable acceleration noise floors on the order of $$1\times 10^{-11}\,\mathrm{m/s^2/\sqrt{Hz}}$$. Furthermore, from our performance models, we devised an optimization approach for our sensor designs, sensitivity, and bandwidth trade space. We conducted characterization measurements of these compact mechanical resonators, demonstrating $$mQ$$-products at levels of 250 kg, which highlight their exquisite acceleration sensitivity.
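    For context on why the $$mQ$$-product is the figure of merit here: the thermomechanical acceleration noise floor of a resonator is $$a_{\mathrm{th}}=\sqrt{4k_BT\omega_0/(mQ)}$$, so at fixed temperature and resonance frequency the floor is set entirely by $$mQ$$. A quick sanity check in Python (the resonance frequency and temperature below are our assumptions, not values from the abstract) shows that $$mQ = 250$$ kg indeed lands at the quoted order of magnitude:

    ```python
    # Back-of-envelope check (assumed f0 and T, not the paper's numbers):
    # the thermomechanical acceleration noise floor of a resonator is
    #   a_th = sqrt(4 * k_B * T * omega_0 / (m * Q)),
    # so the mQ-product sets the floor once omega_0 and T are fixed.
    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 293.0            # assumed room temperature, K
    f0 = 10.0            # assumed mechanical resonance frequency, Hz
    mQ = 250.0           # mQ-product from the abstract, kg

    omega0 = 2 * math.pi * f0
    a_th = math.sqrt(4 * k_B * T * omega0 / mQ)
    print(f"thermal noise floor ≈ {a_th:.1e} m/s^2/sqrt(Hz)")
    # -> ≈ 6e-11 m/s^2/sqrt(Hz), the 1e-11 order quoted in the abstract
    ```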

     
  4. Abstract

    A distributed sensing protocol uses a network of local sensing nodes to estimate a global feature of the network, such as a weighted average of locally detectable parameters. In the noiseless case, continuous-variable (CV) multipartite entanglement shared by the nodes can improve the precision of parameter estimation relative to the precision attainable by a network without shared entanglement; for an entangled protocol, the root-mean-square estimation error scales like $$1/M$$ with the number $$M$$ of sensing nodes, the so-called Heisenberg scaling, while for protocols without entanglement, the error scales like $$1/\sqrt{M}$$. However, in the presence of loss and other noise sources, although multipartite entanglement still has some advantages for sensing displacements and phases, the scaling of the precision with $$M$$ is less favorable. In this paper, we show that using CV error correction codes can enhance the robustness of sensing protocols against imperfections and reinstate Heisenberg scaling up to moderate values of $$M$$. Furthermore, while previous distributed sensing protocols could measure only a single quadrature, we construct a protocol in which both quadratures can be sensed simultaneously. Our work demonstrates the value of CV error correction codes in realistic sensing scenarios.
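    The two scalings can be reproduced in a toy model (ours; the paper's protocol and noise treatment are more general). If one squeezed mode of mean photon number $$Mn$$ is split across the $$M$$ nodes by a balanced beamsplitter network, the symmetric quadrature used by the averaging estimator is the squeezed one, giving rms error $$\sim 1/M$$ at fixed photons per node, whereas $$M$$ independent squeezed modes give only $$\sim 1/\sqrt{M}$$:

    ```python
    # Toy comparison (assumed resource model, not the paper's exact protocol)
    # of standard vs Heisenberg scaling in CV distributed displacement sensing.
    # Each node reads out x_i = d + noise_i and we estimate the average d.
    # - Separable: each node carries its own squeezed vacuum with n photons.
    # - Entangled: one squeezed mode with M*n photons is split across the M
    #   nodes by a balanced beamsplitter network, squeezing the symmetric
    #   (average) quadrature that the estimator actually uses.
    import numpy as np

    def sq_var(n_photons):
        """Variance of the squeezed quadrature, e^{-2r}/2, at mean photon
        number n = sinh^2(r) (vacuum variance convention: 1/2)."""
        r = np.arcsinh(np.sqrt(n_photons))
        return np.exp(-2 * r) / 2

    n = 1.0  # assumed mean squeezed photons available per node
    print(" M   rms(separable)  rms(entangled)")
    for M in (1, 4, 16, 64, 256):
        var_sep = sq_var(n) / M        # M independent squeezed readouts
        var_ent = sq_var(M * n) / M    # shared squeezing on the average mode
        print(f"{M:3d}   {np.sqrt(var_sep):.2e}      {np.sqrt(var_ent):.2e}")
    # Separable rms falls like 1/sqrt(M); entangled rms falls like 1/M
    # (for M*n >> 1, since e^{-2r} ~ 1/(4*M*n)).
    ```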

     
  5. Abstract

    We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $$p_{\text{noisy}}$$ and the corresponding noiseless output distribution $$p_{\text{ideal}}$$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $$F$$ that measures this correlation behaves as $$F=\exp(-2s\epsilon \pm O(s\epsilon^2))$$, where $$\epsilon$$ is the probability of error per circuit location and $$s$$ is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $$p_{\text{noisy}}$$ and the uniform distribution $$p_{\text{unif}}$$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $$p_{\text{noisy}}\approx F p_{\text{ideal}} + (1-F) p_{\text{unif}}$$. In other words, although at least one local error occurs with probability $$1-F$$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $$O(F\epsilon\sqrt{s})$$. Thus, the "white-noise approximation" is meaningful when $$\epsilon\sqrt{s} \ll 1$$, a quadratically weaker condition than the $$\epsilon s \ll 1$$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $$s \ge \Omega(n\log(n))$$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $$\epsilon^{-1} \ge \tilde{\Omega}(n)$$, which is needed to ensure errors are scrambled faster than $$F$$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
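    A short numerical sketch (our illustration; the qubit count, gate count, and error rate below are hypothetical) shows how the white-noise mixture behaves and how the linear cross-entropy benchmark recovers $$F$$ when the ideal distribution follows Porter-Thomas statistics:

    ```python
    # Numerical illustration of the white-noise approximation
    # p_noisy ≈ F * p_ideal + (1 - F) * p_unif and of recovering F with
    # the linear cross-entropy benchmark (XEB). Parameters are assumed.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 12                      # qubits (hypothetical)
    d = 2**n
    s = 500                     # two-qubit gate count (hypothetical)
    eps = 1e-3                  # error rate per circuit location (hypothetical)

    F = np.exp(-2 * s * eps)    # predicted fidelity, F = exp(-2*s*eps)

    # Ideal output of a deep random circuit is well modeled as Porter-Thomas:
    # exponentially distributed probabilities, normalized.
    p_ideal = rng.exponential(size=d)
    p_ideal /= p_ideal.sum()
    p_unif = np.full(d, 1 / d)
    p_noisy = F * p_ideal + (1 - F) * p_unif   # the white-noise mixture

    # XEB of the noisy distribution against the ideal one:
    #   XEB = d * sum_x p_noisy(x) * p_ideal(x) - 1  ≈  F,
    # since d * sum p_ideal^2 ≈ 2 for Porter-Thomas statistics.
    xeb = d * np.dot(p_noisy, p_ideal) - 1
    print(f"F = exp(-2*s*eps) = {F:.4f},  XEB estimate = {xeb:.4f}")
    ```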

     