Squeezed light has long been used to enhance the precision of a single optomechanical sensor. An emerging set of proposals seeks to use arrays of optomechanical sensors to detect weak distributed forces, for applications ranging from gravity-based subterranean imaging to dark matter searches; however, a detailed investigation into the quantum enhancement of this approach remains outstanding. Here, we propose an array of entanglement-enhanced optomechanical sensors to improve the broadband sensitivity of distributed force sensing. By coherently operating the optomechanical sensor array and distributing squeezing to entangle the optical fields, the array of sensors has a scaling advantage over independent sensors (i.e., $$\sqrt{M}\to M$$, where $$M$$ is the number of sensors).
A distributed sensing protocol uses a network of local sensing nodes to estimate a global feature of the network, such as a weighted average of locally detectable parameters. In the noiseless case, continuous-variable (CV) multipartite entanglement shared by the nodes can improve the precision of parameter estimation relative to the precision attainable by a network without shared entanglement; for an entangled protocol, the root mean square estimation error scales like $$1/M$$ in the number of nodes $$M$$, surpassing the $$1/\sqrt{M}$$ scaling attainable without entanglement.
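The unentangled baseline in the scaling claim above can be checked numerically. The sketch below is my own illustration (not the paper's protocol): M independent sensors with unit Gaussian readout noise, averaged to estimate a global parameter, give a root mean square error of 1/sqrt(M); the entangled protocol described above improves this scaling to 1/M.

```python
import math
import random

# Toy model: each of m independent sensors reads the true value plus unit
# Gaussian noise; the estimator of the global (equal-weight) average is the
# sample mean, whose RMSE is 1/sqrt(m).
rng = random.Random(0)
TRUE_VALUE = 0.7
TRIALS = 20000

def rmse_independent(m):
    """Monte Carlo RMSE of the sample-mean estimator over TRIALS runs."""
    sq_err = 0.0
    for _ in range(TRIALS):
        est = sum(TRUE_VALUE + rng.gauss(0, 1) for _ in range(m)) / m
        sq_err += (est - TRUE_VALUE) ** 2
    return math.sqrt(sq_err / TRIALS)

for m in (4, 16, 64):
    print(m, round(rmse_independent(m), 3))  # approx 1/sqrt(m): 0.5, 0.25, 0.125
```

Doubling the number of independent sensors therefore only improves precision by sqrt(2), which is what makes the entanglement-enabled 1/M scaling attractive for large arrays.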
- Award ID(s): 1936118
- NSF-PAR ID: 10360334
- Publisher / Repository: IOP Publishing
- Date Published:
- Journal Name: New Journal of Physics
- Volume: 22
- Issue: 2
- ISSN: 1367-2630
- Page Range / eLocation ID: Article No. 022001
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract: The array of entangled sensors has a scaling advantage over independent sensors (i.e., $$\sqrt{M}\to M$$, where $$M$$ is the number of sensors) due to coherence, as well as joint noise suppression due to multipartite entanglement. As an illustration, we consider entanglement enhancement of an optomechanical accelerometer array to search for dark matter, and elucidate the challenge of realizing a quantum advantage in this context.
Abstract: In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:
- Certifying that a list of n integers has no 3-SUM solution can be done in Merlin–Arthur time $$\tilde{O}(n)$$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $$\tilde{O}(n^{1.5})$$ time (that is, there is a proof system with proofs of length $$\tilde{O}(n^{1.5})$$ and a deterministic verifier running in $$\tilde{O}(n^{1.5})$$ time).
- Counting the number of k-cliques with total edge weight equal to zero in an n-node graph can be done in Merlin–Arthur time $$\tilde{O}(n^{\lceil k/2\rceil})$$ (where $$k\ge 3$$). For odd k, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an m-edge graph can be done in Merlin–Arthur time $$\tilde{O}(m)$$. Previous Merlin–Arthur protocols by Williams [CCC’16] and Björklund and Kaski [PODC’16] could only count k-cliques in unweighted graphs, and had worse running times for small k.
- Computing the All-Pairs Shortest Distances matrix for an n-node graph can be done in Merlin–Arthur time $$\tilde{O}(n^2)$$. Note this is optimal, as the matrix can have $$\Omega(n^2)$$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $$\tilde{O}(n^{2.94})$$ nondeterministic time algorithm.
- Certifying that an n-variable k-CNF is unsatisfiable can be done in Merlin–Arthur time $$2^{n/2 - n/O(k)}$$. We also observe an algebrization barrier for the previous $$2^{n/2}\cdot \textrm{poly}(n)$$-time Merlin–Arthur protocol of R. Williams [CCC’16] for $$\#$$SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for k-UNSAT running in $$2^{n/2}/n^{\omega (1)}$$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.
- Certifying a Quantified Boolean Formula is true can be done in Merlin–Arthur time $$2^{4n/5}\cdot \textrm{poly}(n)$$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in $$2^{2n/3}\cdot \textrm{poly}(n)$$ time.
- Certifying that there is no Subset Sum solution over n integers can be done in Merlin–Arthur time $$2^{n/3}\cdot \textrm{poly}(n)$$, improving on the previous best protocol by Nederlof [IPL 2017] which took $$2^{0.49991n}\cdot \textrm{poly}(n)$$ time.
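A classic toy instance of the Merlin–Arthur idea (my own illustration, not a protocol from the paper above) is Freivalds' check for matrix products: Merlin claims C = A·B, and Arthur verifies the claim in O(n^2) time per round by testing A(Bx) == Cx for random vectors x, instead of recomputing the product in O(n^3).

```python
import random

def matvec(m, x):
    """Multiply matrix m (list of rows) by vector x."""
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in m]

def freivalds(a, b, c, k=20, seed=1):
    """Accept a correct product always; reject a wrong one with
    probability at least 1 - 2**-k over the random test vectors."""
    rng = random.Random(seed)
    n = len(a)
    for _ in range(k):
        x = [rng.randrange(2) for _ in range(n)]
        if matvec(a, matvec(b, x)) != matvec(c, x):
            return False  # caught a discrepancy: proof rejected
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]   # correct product A.B
bad = [[19, 22], [43, 51]]    # off by one in a single entry
print(freivalds(A, B, good), freivalds(A, B, bad))  # True False
```

As in the protocols above, the verifier's randomness is what lets it run much faster than the best known deterministic check, at the price of a (controllably small) chance of accepting a bad proof.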
Abstract: We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $$p_{\text{noisy}}$$ and the corresponding noiseless output distribution $$p_{\text{ideal}}$$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark F that measures this correlation behaves as $$F=\exp(-2s\epsilon \pm O(s\epsilon^2))$$, where $$\epsilon$$ is the probability of error per circuit location and s is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $$p_{\text{noisy}}$$ and the uniform distribution $$p_{\text{unif}}$$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $$p_{\text{noisy}}\approx Fp_{\text{ideal}} + (1-F)p_{\text{unif}}$$. In other words, although at least one local error occurs with probability $$1-F$$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $$O(F\epsilon\sqrt{s})$$. Thus, the “white-noise approximation” is meaningful when $$\epsilon\sqrt{s}\ll 1$$, a quadratically weaker condition than the $$\epsilon s\ll 1$$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $$s \ge \Omega(n\log(n))$$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $$\epsilon^{-1} \ge \tilde{\Omega}(n)$$, which is needed to ensure errors are scrambled faster than F decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
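A small numeric sketch of the white-noise model (toy distribution and assumed parameter values, not from the paper): build the mixture F·p_ideal + (1−F)·p_unif and confirm that the linear cross-entropy benchmark, normalized by the ideal distribution's own benchmark value, returns exactly F.

```python
import math
import random

n = 3                       # qubits, so d = 8 outcomes (toy size)
d = 2 ** n
s = 50                      # number of two-qubit gates (assumed)
eps = 2e-3                  # error probability per circuit location (assumed)
F = math.exp(-2 * s * eps)  # leading-order fidelity from the abstract

# Porter-Thomas-like ideal distribution: exponentially distributed weights.
rng = random.Random(0)
w = [rng.expovariate(1.0) for _ in range(d)]
total = sum(w)
p_ideal = [x / total for x in w]
p_noisy = [F * p + (1 - F) / d for p in p_ideal]   # white-noise mixture

def xeb(p, q):
    """Linear cross-entropy benchmark of distribution p against ideal q."""
    return d * sum(pi * qi for pi, qi in zip(p, q)) - 1

# The mixture's benchmark, normalized by the ideal self-benchmark, equals F.
ratio = xeb(p_noisy, p_ideal) / xeb(p_ideal, p_ideal)
print(round(F, 4), round(ratio, 4))
```

The identity follows from linearity: the uniform component contributes zero to the benchmark, so only the F-weighted ideal component survives.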
In a conventional atomic interferometer employing N atoms, the phase sensitivity is at the standard quantum limit: $$1/\sqrt{N}$$. Under usual spin squeezing, the sensitivity is increased by lowering the quantum noise. It is also possible to increase the sensitivity by leaving the quantum noise unchanged while producing phase amplification. Here we show how to increase the sensitivity to the Heisenberg limit of $$1/N$$, while increasing the quantum noise by a factor of $$\sqrt{N}$$ and amplifying the phase by a factor of $$N$$. Because of the enhancement of the quantum noise and the large phase magnification, the effect of excess noise is highly suppressed. The protocol uses a Schrödinger cat state representing a maximally entangled superposition of two collective states of N atoms. The phase magnification occurs when we use either atomic state detection or collective state detection; however, the robustness against excess noise occurs only when atomic state detection is employed. We show that for one version of the protocol, the signal amplitude is maximal when N is even, and is vanishingly small when N is odd, for both types of detection. We also show how the protocol can be modified to reverse the nature of the signal for odd versus even values of N. Thus, for a situation where the probability of N being even or odd is equal, the net sensitivity is within a factor of $$\sqrt{2}$$ of the Heisenberg limit. Finally, we discuss potential experimental constraints for implementing this scheme via one-axis-twist squeezing employing the cavity feedback scheme, and show that the effects of cavity decay and spontaneous emission are highly suppressed because of the increased quantum noise and the large phase magnification inherent to the protocol. As a result, we find that the maximum improvement in sensitivity can be close to the ideal limit for as many as 10 million atoms.
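A back-of-the-envelope check of the scaling (my own arithmetic, with the noise and magnification factors assumed so as to reproduce the quoted Heisenberg scaling): phase sensitivity is quantum noise divided by signal slope, so magnifying the phase by N while the noise grows by sqrt(N) turns the 1/sqrt(N) standard quantum limit into the 1/N Heisenberg limit.

```python
import math

def sensitivity(noise, slope):
    """Phase sensitivity = quantum noise / signal slope."""
    return noise / slope

N = 1_000_000

# Conventional interferometer: noise sqrt(N)/2, slope N/2 -> 1/sqrt(N).
sql = sensitivity(math.sqrt(N) / 2, N / 2)

# Cat-state protocol (assumed factors): noise grows by sqrt(N), phase
# slope grows by N -> sensitivity 1/N, the Heisenberg limit.
heisenberg = sensitivity(math.sqrt(N) * math.sqrt(N) / 2, N * N / 2)

print(sql, 1 / math.sqrt(N))   # both 1e-3
print(heisenberg, 1 / N)       # both 1e-6
```

The same arithmetic shows why excess (technical) noise is suppressed: a fixed amount of added detection noise is small compared to the enlarged quantum noise, while the signal has been magnified.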
Abstract: We describe the results of a new reverberation mapping program focused on the nearby Seyfert galaxy NGC 3227. Photometric and spectroscopic monitoring was carried out from 2022 December to 2023 June with the Las Cumbres Observatory network of telescopes. We detected time delays in several optical broad emission lines, with Hβ having the longest delay and He II having the shortest. We also detect velocity-resolved behavior of the Hβ emission line, with different line-of-sight velocities corresponding to different observed time delays. Combining the integrated Hβ time delay with the width of the variable component of the emission line and a standard scale factor yields an estimate of the black hole mass in solar masses ($$M_\odot$$). Modeling of the full velocity-resolved response of the Hβ emission line with the phenomenological code CARAMEL finds a similar mass and suggests that the Hβ-emitting broad-line region (BLR) may be represented by a biconical or flared disk structure that we are viewing at an inclination angle of $$\theta_i \approx 33^\circ$$ and with gas motions that are dominated by rotation. The new photoionization-based BLR modeling tool BELMAC finds general agreement with the observations when assuming the best-fit CARAMEL results; however, BELMAC prefers a thick-disk geometry and kinematics that are equally composed of rotation and inflow. Both codes infer a radially extended and flattened BLR that is not outflowing.
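The core measurement behind reverberation mapping can be sketched with synthetic data (my own toy light curves, not NGC 3227 data): the broad-line light curve echoes the continuum with a time delay, which is recovered from the peak of the cross-correlation function.

```python
import math

TRUE_LAG = 4  # days (illustrative value, not a measured delay)

def continuum(t):
    """Synthetic continuum variability: a sum of two sinusoids."""
    return math.sin(0.31 * t) + 0.5 * math.sin(0.097 * t)

days = range(200)
cont = [continuum(t) for t in days]
line = [continuum(t - TRUE_LAG) for t in days]   # delayed echo of the continuum

def xcorr(lag):
    """Pearson correlation between the lag-shifted continuum and the line."""
    pairs = [(cont[t - lag], line[t]) for t in range(lag, len(days))]
    mc = sum(a for a, _ in pairs) / len(pairs)
    ml = sum(b for _, b in pairs) / len(pairs)
    num = sum((a - mc) * (b - ml) for a, b in pairs)
    den = math.sqrt(sum((a - mc) ** 2 for a, _ in pairs) *
                    sum((b - ml) ** 2 for _, b in pairs))
    return num / den

best = max(range(0, 15), key=xcorr)
print(best)  # recovers the 4-day input lag
```

Real campaigns contend with irregular sampling, measurement noise, and a smeared (velocity-dependent) response rather than a single sharp delay, which is what the velocity-resolved modeling described above addresses.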