Squeezed light has long been used to enhance the precision of a single optomechanical sensor. An emerging set of proposals seeks to use arrays of optomechanical sensors to detect weak distributed forces, for applications ranging from gravity-based subterranean imaging to dark matter searches; however, a detailed investigation into the quantum enhancement of this approach remains outstanding. Here, we propose an array of entanglement-enhanced optomechanical sensors to improve the broadband sensitivity of distributed force sensing. By coherently operating the optomechanical sensor array and distributing squeezing to entangle the optical fields, the array of sensors has a scaling advantage over independent sensors (i.e.,
NSF-PAR ID: 10452888
Publisher / Repository: Nature Publishing Group
Journal Name: Communications Physics
Volume: 6
Issue: 1
ISSN: 2399-3650
Sponsoring Org: National Science Foundation
More Like this

Abstract We prove that $\mathrm{poly}(t)\cdot n^{1/D}$-depth local random quantum circuits with two-qudit nearest-neighbor gates on a $D$-dimensional lattice with $n$ qudits are approximate $t$-designs in various measures. These include the "monomial" measure, meaning that the monomials of a random circuit from this family have expectation close to the value that would result from the Haar measure. Previously, the best bound was $\mathrm{poly}(t)\cdot n$, due to Brandão–Harrow–Horodecki (Commun Math Phys 346(2):397–434, 2016), for $D=1$. We also improve the "scrambling" and "decoupling" bounds for spatially local random circuits due to Brown and Fawzi (Scrambling speed of random quantum circuits, 2012). One consequence of our result is that assuming the polynomial hierarchy ($\mathsf{PH}$) is infinite and that certain counting problems are $\#\mathsf{P}$-hard "on average", sampling within total variation distance from these circuits is hard for classical computers. Previously, exact sampling from the outputs of even constant-depth quantum circuits was known to be hard for classical computers under these assumptions. However, the standard strategy for extending this hardness result to approximate sampling requires the quantum circuits to have a property called "anti-concentration", meaning roughly that the output has near-maximal entropy. Unitary 2-designs have the desired anti-concentration property. Our result improves the required depth for this level of anti-concentration from linear depth to a sub-linear value, depending on the geometry of the interactions.
This is relevant to a recent experiment by the Google Quantum AI group to perform such a sampling task with 53 qubits on a two-dimensional lattice (Arute et al. in Nature 574(7779):505–510, 2019; Boixo et al. in Nat Phys 14(6):595–600, 2018) (and related experiments by USTC), and confirms their conjecture that $O(\sqrt{n})$ depth suffices for anti-concentration. The proof is based on a previous construction of $t$-designs by Brandão et al. (2016), an analysis of how approximate designs behave under composition, and an extension of the quasi-orthogonality of permutation operators developed by Brandão et al. (2016). Different versions of the approximate design condition correspond to different norms, and part of our contribution is to introduce the norm corresponding to anti-concentration and to establish equivalence between these various norms for low-depth circuits. For random circuits with long-range gates, we use different methods to show that anti-concentration happens at circuit size $O(n\ln^2 n)$, corresponding to depth $O(\ln^3 n)$. We also show a lower bound of $\Omega(n\ln n)$ for the size of such circuits in this case. We also prove that anti-concentration is possible in depth $O(\ln n \ln\ln n)$ (size $O(n\ln n \ln\ln n)$) using a different model.
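The anti-concentration property discussed above can be illustrated numerically. For a Haar-random state the output probabilities follow Porter–Thomas statistics, and the fraction of outcomes carrying probability at least $\alpha/2^n$ tends to $e^{-\alpha}$. The following sketch (an illustration, not the paper's construction) checks this for a random state drawn via normalized complex Gaussians:

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 10
N = 2 ** n_qubits

# A Haar-random pure state: normalize a vector of complex Gaussians.
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)
probs = np.abs(psi) ** 2

# Anti-concentration: a constant fraction of outcomes has probability
# at least alpha/N. Porter-Thomas statistics predict that fraction is
# about exp(-alpha); with alpha = 0.5 that is roughly 0.61.
alpha = 0.5
fraction = np.mean(probs >= alpha / N)
print(f"fraction of outcomes with p >= {alpha}/N: {fraction:.3f}")
```

Shallow circuits that fail to anti-concentrate would instead pile almost all probability onto a few outcomes, which is what blocks the standard approximate-sampling hardness argument.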
We present a performance analysis of compact monolithic optomechanical inertial sensors that describes their key fundamental limits and overall acceleration noise floor. Performance simulations for low-frequency gravity-sensitive inertial sensors show attainable acceleration noise floors on the order of $1\times 10^{-11}\ \mathrm{m/s^2/\sqrt{Hz}}$. Furthermore, from our performance models, we devised an optimization approach for our sensor designs, sensitivity, and bandwidth trade space. We conducted characterization measurements of these compact mechanical resonators, demonstrating $mQ$ products at levels of 250 kg, which highlight their exquisite acceleration sensitivity.
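The $mQ$ product sets the thermally limited acceleration noise floor through $a_{\mathrm{th}}=\sqrt{4 k_B T \omega_0 / (mQ)}$. A minimal sketch, assuming room temperature and a 1 Hz resonance (operating-point values not stated in the abstract), shows that $mQ = 250$ kg indeed lands near the quoted $10^{-11}\ \mathrm{m/s^2/\sqrt{Hz}}$ floor:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_accel_noise(mQ_product, f0_hz, temp_k=293.0):
    """Thermal acceleration noise floor (m/s^2/sqrt(Hz)) of a mechanical
    resonator: a_th = sqrt(4 k_B T omega_0 / (m Q))."""
    omega0 = 2 * math.pi * f0_hz
    return math.sqrt(4 * k_B * temp_k * omega0 / mQ_product)

# Assumed operating point: T = 293 K, f0 = 1 Hz, mQ = 250 kg.
print(thermal_accel_noise(250.0, 1.0))  # ~2e-11 m/s^2/sqrt(Hz)
```

Because the noise scales as $1/\sqrt{mQ}$, raising either the resonator mass or its quality factor improves the floor, which is why the $mQ$ product is the figure of merit.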
Abstract A distributed sensing protocol uses a network of local sensing nodes to estimate a global feature of the network, such as a weighted average of locally detectable parameters. In the noiseless case, continuous-variable (CV) multipartite entanglement shared by the nodes can improve the precision of parameter estimation relative to the precision attainable by a network without shared entanglement; for an entangled protocol, the root-mean-square estimation error scales like $1/M$ with the number $M$ of sensing nodes, the so-called Heisenberg scaling, while for protocols without entanglement, the error scales like $1/\sqrt{M}$. However, in the presence of loss and other noise sources, although multipartite entanglement still has some advantages for sensing displacements and phases, the scaling of the precision with $M$ is less favorable. In this paper, we show that using CV error correction codes can enhance the robustness of sensing protocols against imperfections and reinstate Heisenberg scaling up to moderate values of $M$. Furthermore, while previous distributed sensing protocols could measure only a single quadrature, we construct a protocol in which both quadratures can be sensed simultaneously. Our work demonstrates the value of CV error correction codes in realistic sensing scenarios.
Abstract A new type of interferometric fiber sensor based on a Mach–Zehnder–Fabry–Perot hybrid scheme has been experimentally demonstrated. The interferometer combines the benefits of both a double-path configuration and an optical resonator, leading to record-high strain and phase resolutions limited only by the intrinsic thermal noise in optical fibers across a broad frequency range. Using only off-the-shelf components, the sensor is able to achieve noise-limited strain resolutions of $40\ \mathrm{f}\varepsilon/\sqrt{\mathrm{Hz}}$ at 10 Hz and $1\ \mathrm{f}\varepsilon/\sqrt{\mathrm{Hz}}$ at 100 kHz. With a proper scale-up, attostrain resolutions are believed to be within reach in the ultrasonic frequency range with such interferometers.
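To get a feel for what femtostrain resolution demands of the interferometer, one can convert strain to optical phase via $\Delta\phi = 2\pi n_{\mathrm{eff}} L\, \xi\, \varepsilon / \lambda$, where $\xi \approx 0.78$ accounts for the strain-optic (photoelastic) effect. The fiber length, wavelength, and index below are assumed for illustration and are not taken from the abstract:

```python
import math

def phase_per_strain(length_m, n_eff=1.468, wavelength_m=1550e-9, xi=0.78):
    """Phase response (rad per unit strain) of a sensing fiber of given
    length; xi ~ 0.78 is the standard strain-optic correction."""
    return 2 * math.pi * n_eff * length_m * xi / wavelength_m

# Assumed sensing-fiber length: 100 m at 1550 nm.
resp = phase_per_strain(100.0)  # rad per unit strain
strain_res = 40e-15             # 40 f-epsilon/sqrt(Hz)
print(resp * strain_res)        # required phase resolution, rad/sqrt(Hz)
```

Under these assumptions, resolving $40\ \mathrm{f}\varepsilon/\sqrt{\mathrm{Hz}}$ corresponds to a phase resolution of roughly $2\times 10^{-5}\ \mathrm{rad}/\sqrt{\mathrm{Hz}}$, which is why thermal phase noise in the fiber itself becomes the limiting factor.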
Abstract Recently, the Hydrogen Epoch of Reionization Array (HERA) has produced the experiment's first upper limits on the power spectrum of 21 cm fluctuations at z ∼ 8 and 10. Here, we use several independent theoretical models to infer constraints on the intergalactic medium (IGM) and galaxies during the epoch of reionization from these limits. We find that the IGM must have been heated above the adiabatic-cooling threshold by z ∼ 8, independent of uncertainties about IGM ionization and the radio background. Combining HERA limits with complementary observations constrains the spin temperature of the z ∼ 8 neutral IGM to $27\ \mathrm{K} < \langle\bar{T}_S\rangle < 630\ \mathrm{K}$ ($2.3\ \mathrm{K} < \langle\bar{T}_S\rangle < 640\ \mathrm{K}$) at 68% (95%) confidence. They therefore also place a lower bound on X-ray heating, a previously unconstrained aspect of early galaxies. For example, if the cosmic microwave background dominates the z ∼ 8 radio background, the new HERA limits imply that the first galaxies produced X-rays more efficiently than local ones. The z ∼ 10 limits require even earlier heating if dark-matter interactions cool the hydrogen gas. If an extra radio background is produced by galaxies, we rule out (at 95% confidence) the combination of high radio and low X-ray luminosities of $L_{r,\nu}/\mathrm{SFR} > 4\times 10^{24}\ \mathrm{W\,Hz^{-1}}\,M_\odot^{-1}\,\mathrm{yr}$ and $L_X/\mathrm{SFR} < 7.6\times 10^{39}\ \mathrm{erg\,s^{-1}}\,M_\odot^{-1}\,\mathrm{yr}$. The new HERA upper limits neither support nor disfavor a cosmological interpretation of the recent Experiment to Detect the Global EOR Signature (EDGES) measurement. The framework described here provides a foundation for the interpretation of future HERA results.
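The spin-temperature bounds translate into a 21 cm brightness temperature through the standard approximation $T_{21} \approx 27\,x_{\mathrm{HI}}\,(1 - T_\gamma/T_S)\sqrt{(1+z)/10}$ mK. A small sketch (using this textbook scaling, not the paper's full models, and assuming a fully neutral patch) shows that both ends of the HERA-allowed range put the signal in emission rather than deep absorption:

```python
import math

def t21_mk(t_spin_k, z, x_hi=1.0):
    """Approximate 21 cm differential brightness temperature (mK)
    against the CMB: T21 ~ 27 x_HI (1 - T_gamma/T_S) sqrt((1+z)/10)."""
    t_gamma = 2.725 * (1 + z)  # CMB temperature at redshift z, K
    return 27.0 * x_hi * (1.0 - t_gamma / t_spin_k) * math.sqrt((1 + z) / 10.0)

# Evaluate at the z ~ 8 bounds, 27 K < T_S < 630 K (CMB there is ~24.5 K).
print(t21_mk(27.0, 8.0), t21_mk(630.0, 8.0))
```

Because the CMB temperature at z ∼ 8 is about 24.5 K, a spin temperature above 27 K forces $T_{21} > 0$: the heated-IGM conclusion is visible directly in the sign of the expression.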