Squeezed light has long been used to enhance the precision of a single optomechanical sensor. An emerging set of proposals seeks to use arrays of optomechanical sensors to detect weak distributed forces, for applications ranging from gravity-based subterranean imaging to dark matter searches; however, a detailed investigation into the quantum enhancement of this approach remains outstanding. Here, we propose an array of entanglement-enhanced optomechanical sensors to improve the broadband sensitivity of distributed force sensing. By coherently operating the optomechanical sensor array and distributing squeezing to entangle the optical fields, the array of sensors attains a scaling advantage over independent sensors.
- NSF-PAR ID:
- 10452888
- Publisher / Repository:
- Nature Publishing Group
- Date Published:
- Journal Name:
- Communications Physics
- Volume:
- 6
- Issue:
- 1
- ISSN:
- 2399-3650
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract We prove that $${{\,\textrm{poly}\,}}(t)\cdot n^{1/D}$$-depth local random quantum circuits with two-qudit nearest-neighbor gates on a D-dimensional lattice with n qudits are approximate t-designs in various measures. These include the “monomial” measure, meaning that the monomials of a random circuit from this family have expectation close to the value that would result from the Haar measure. Previously, the best bound was $${{\,\textrm{poly}\,}}(t)\cdot n$$ due to Brandão–Harrow–Horodecki (Commun Math Phys 346(2):397–434, 2016) for $$D=1$$. We also improve the “scrambling” and “decoupling” bounds for spatially local random circuits due to Brown and Fawzi (Scrambling speed of random quantum circuits, 2012). One consequence of our result is that assuming the polynomial hierarchy ($${{\,\mathrm{\textsf{PH}}\,}}$$) is infinite and that certain counting problems are $$\#{\textsf{P}}$$-hard “on average”, sampling within total variation distance from these circuits is hard for classical computers. Previously, exact sampling from the outputs of even constant-depth quantum circuits was known to be hard for classical computers under these assumptions. However, the standard strategy for extending this hardness result to approximate sampling requires the quantum circuits to have a property called “anti-concentration”, meaning roughly that the output has near-maximal entropy. Unitary 2-designs have the desired anti-concentration property. Our result improves the required depth for this level of anti-concentration from linear depth to a sub-linear value, depending on the geometry of the interactions. This is relevant to a recent experiment by the Google Quantum AI group to perform such a sampling task with 53 qubits on a two-dimensional lattice (Arute et al. in Nature 574(7779):505–510, 2019; Boixo et al. in Nat Phys 14(6):595–600, 2018) (and related experiments by USTC), and confirms their conjecture that $$O(\sqrt{n})$$ depth suffices for anti-concentration.
The proof is based on a previous construction of t-designs by Brandão et al. (2016), an analysis of how approximate designs behave under composition, and an extension of the quasi-orthogonality of permutation operators developed by Brandão et al. (2016). Different versions of the approximate design condition correspond to different norms, and part of our contribution is to introduce the norm corresponding to anti-concentration and to establish equivalence between these various norms for low-depth circuits. For random circuits with long-range gates, we use different methods to show that anti-concentration happens at circuit size $$O(n\ln ^2 n)$$, corresponding to depth $$O(\ln ^3 n)$$. We also show a lower bound of $$\Omega (n \ln n)$$ for the size of such circuits in this case. We also prove that anti-concentration is possible in depth $$O(\ln n \ln \ln n)$$ (size $$O(n \ln n \ln \ln n)$$) using a different model.
-
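The anti-concentration property discussed above can be made concrete with a small numerical sketch (our illustration, not from the paper): for Haar-random states on a Hilbert space of dimension D, the expected collision probability Σ_x p(x)² equals 2/(D+1), close to the 1/D minimum, so no single output probability dominates. Assuming NumPy is available, a quick Monte Carlo check:

```python
import numpy as np

def haar_random_state(dim, rng):
    """Sample a Haar-random pure state as the first column of a
    Haar-random unitary (QR decomposition of a complex Ginibre matrix)."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    q = q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases for Haar measure
    return q[:, 0]

def mean_collision_probability(n_qubits, trials=400, seed=0):
    """Average sum_x p(x)^2 over Haar-random states; Haar value is 2/(D+1)."""
    dim = 2 ** n_qubits
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        probs = np.abs(haar_random_state(dim, rng)) ** 2
        total += np.sum(probs ** 2)
    return total / trials

if __name__ == "__main__":
    n = 4
    print(mean_collision_probability(n), 2 / (2 ** n + 1))  # estimate vs. Haar value
```

For n = 4 qubits (D = 16) the empirical average lands near 2/17 ≈ 0.118, the Porter–Thomas value, rather than at the maximally peaked extreme of 1; a circuit family whose collision probability stays within a constant factor of this Haar value is anti-concentrated in the sense used above.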
We present a performance analysis of compact monolithic optomechanical inertial sensors that describes their key fundamental limits and overall acceleration noise floor. Performance simulations for low-frequency gravity-sensitive inertial sensors show attainable acceleration noise floors on the order of . Furthermore, from our performance models, we devised an optimization approach for the trade space between sensor design, sensitivity, and bandwidth. We conducted characterization measurements of these compact mechanical resonators, demonstrating -products at levels of 250 kg, which highlight their exquisite acceleration sensitivity.
-
Abstract A distributed sensing protocol uses a network of local sensing nodes to estimate a global feature of the network, such as a weighted average of locally detectable parameters. In the noiseless case, continuous-variable (CV) multipartite entanglement shared by the nodes can improve the precision of parameter estimation relative to the precision attainable by a network without shared entanglement; for an entangled protocol, the root-mean-square estimation error scales like 1/M with the number M of sensing nodes, the so-called Heisenberg scaling, while for protocols without entanglement, the error scales like $$1/\sqrt{M}$$. However, in the presence of loss and other noise sources, although multipartite entanglement still has some advantages for sensing displacements and phases, the scaling of the precision with M is less favorable. In this paper, we show that using CV error correction codes can enhance the robustness of sensing protocols against imperfections and reinstate Heisenberg scaling up to moderate values of M. Furthermore, while previous distributed sensing protocols could measure only a single quadrature, we construct a protocol in which both quadratures can be sensed simultaneously. Our work demonstrates the value of CV error correction codes in realistic sensing scenarios.
-
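The 1/M versus 1/√M distinction in the abstract above can be illustrated numerically (a minimal sketch of ours, not from the paper): averaging M independent, equally noisy sensor readings shrinks the standard error only like 1/√M (the standard quantum limit analogue), whereas Heisenberg scaling would shrink it like 1/M.

```python
import numpy as np

def standard_error_of_mean(m_nodes, sigma=1.0, trials=20000, seed=1):
    """Empirical RMS error of the mean of M independent sensor readings,
    each with Gaussian noise of standard deviation sigma (1/sqrt(M) scaling)."""
    rng = np.random.default_rng(seed)
    readings = rng.normal(0.0, sigma, size=(trials, m_nodes))
    estimates = readings.mean(axis=1)       # unentangled estimator: plain average
    return np.sqrt(np.mean(estimates ** 2))

if __name__ == "__main__":
    for m in (1, 4, 16, 64):
        sql = standard_error_of_mean(m)     # empirical ~ sigma / sqrt(M)
        heisenberg = 1.0 / m                # target scaling with entanglement
        print(m, round(sql, 3), heisenberg)
```

Quadrupling M only halves the empirical error (0.5 at M = 4, 0.25 at M = 16), while Heisenberg scaling would quarter it; that gap is the advantage the entangled protocol aims to recover.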
Abstract A new type of interferometric fiber sensor based on a Mach-Zehnder Fabry-Perot hybrid scheme has been experimentally demonstrated. The interferometer combines the benefits of both a double-path configuration and an optical resonator, leading to record-high strain and phase resolutions limited only by the intrinsic thermal noise in optical fibers across a broad frequency range. Using only off-the-shelf components, the sensor is able to achieve noise-limited strain resolutions of 40 f$$\varepsilon $$/$$\sqrt{\textrm{Hz}}$$ at 10 Hz and 1 f$$\varepsilon $$/$$\sqrt{\textrm{Hz}}$$ at 100 kHz. With a proper scale-up, atto-strain resolutions are believed to be within reach in the ultrasonic frequency range with such interferometers.
-
Abstract Recently, the Hydrogen Epoch of Reionization Array (HERA) has produced the experiment’s first upper limits on the power spectrum of 21 cm fluctuations at
z ∼ 8 and 10. Here, we use several independent theoretical models to infer constraints on the intergalactic medium (IGM) and galaxies during the epoch of reionization from these limits. We find that the IGM must have been heated above the adiabatic-cooling threshold by z ∼ 8, independent of uncertainties about IGM ionization and the radio background. Combining HERA limits with complementary observations constrains the spin temperature of the z ∼ 8 neutral IGM to 27 K < T̄S < 630 K (2.3 K < T̄S < 640 K) at 68% (95%) confidence. These limits therefore also place a lower bound on X-ray heating, a previously unconstrained aspect of early galaxies. For example, if the cosmic microwave background dominates the z ∼ 8 radio background, the new HERA limits imply that the first galaxies produced X-rays more efficiently than local ones. The z ∼ 10 limits require even earlier heating if dark-matter interactions cool the hydrogen gas. If an extra radio background is produced by galaxies, we rule out (at 95% confidence) the combination of high radio and low X-ray luminosities of Lr,ν/SFR > 4 × 10²⁴ W Hz⁻¹ M⊙⁻¹ yr and LX/SFR < 7.6 × 10³⁹ erg s⁻¹ M⊙⁻¹ yr. The new HERA upper limits neither support nor disfavor a cosmological interpretation of the recent Experiment to Detect the Global EOR Signature (EDGES) measurement. The framework described here provides a foundation for the interpretation of future HERA results.