

Title: Gravitational causality and the self-stress of photons
Abstract We study causality in gravitational systems beyond the classical limit. Using on-shell methods, we consider the one-loop corrections from charged particles to the photon energy-momentum tensor — the self-stress — that controls the quantum interaction between two on-shell photons and one off-shell graviton. The self-stress in turn determines the phase shift and time delay in the scattering of photons against a spectator particle of any spin in the eikonal regime. We show that the sign of the β-function associated with the running gauge coupling is related to the sign of the time delay at small impact parameter. Our results show that, at first post-Minkowskian order, asymptotic causality, where the time delay experienced by any particle must be positive, is respected quantum mechanically. In contrast with asymptotic causality, we explore a local notion of causality, under which the time delay must be longer than that of gravitons, and which is seemingly violated by quantum effects.
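The eikonal relation between phase shift and time delay invoked in the abstract can be summarized schematically; the normalization below is one common convention, not necessarily the paper's:

```latex
% Eikonal phase from the 2->2 amplitude at large s and fixed impact parameter b:
\chi(s,b) \;=\; \frac{1}{4\,|\vec p\,|\sqrt{s}} \int \frac{d^2 q}{(2\pi)^2}\,
    e^{i \vec q \cdot \vec b}\, \mathcal{M}\!\left(s,\, t = -\vec q^{\,2}\right),
% Time delay of a photon of energy \omega scattering off the spectator:
\Delta t \;=\; \frac{\partial \chi}{\partial \omega},
% Asymptotic causality: \Delta t \ge 0 for every particle.
```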
Award ID(s):
2014071
NSF-PAR ID:
10346696
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Journal of High Energy Physics
Volume:
2022
Issue:
5
ISSN:
1029-8479
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We define the notion of one-shot signatures: signatures in which any secret key can be used to sign only a single message, after which it self-destructs. While such signatures are of course impossible classically, we construct one-shot signatures using quantum no-cloning. In particular, we show that such signatures exist relative to a classical oracle, which we can then heuristically obfuscate using known indistinguishability obfuscation schemes. We show that one-shot signatures have numerous applications to hybrid quantum/classical cryptographic tasks, where all communication is required to be classical but local quantum operations are allowed. Applications include one-time signature tokens, quantum money with classical communication, decentralized blockchain-less cryptocurrency, signature schemes with unclonable secret keys, non-interactive certifiable min-entropy, and more. We thus position one-shot signatures as a powerful new building block for novel quantum cryptographic protocols.
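For contrast with the quantum construction above, the closest classical analogue is a one-time signature such as Lamport's, where signing twice merely *breaks security* rather than being impossible; one-shot signatures upgrade "should sign once" to "can only sign once" via no-cloning. A minimal Lamport sketch (toy 16-bit digest, purely illustrative):

```python
import hashlib
import os

H = lambda x: hashlib.sha256(x).digest()
BITS = 16  # toy message-digest length; real deployments use 256

def keygen():
    # Two secret preimages per message bit; vk commits to them by hashing.
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(BITS)]
    vk = [[H(s0), H(s1)] for s0, s1 in sk]
    return sk, vk

def bits(msg: bytes):
    d = int.from_bytes(H(msg)[:2], "big")  # toy 16-bit digest of the message
    return [(d >> i) & 1 for i in range(BITS)]

def sign(sk, msg: bytes):
    # Reveal one preimage per bit; revealing a second set (another message)
    # leaks enough key material to forge — hence strictly one-time.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(vk, msg: bytes, sig):
    return all(H(sig[i]) == vk[i][b] for i, b in enumerate(bits(msg)))
```

A classical signer can always copy `sk` before signing, which is exactly the attack that quantum no-cloning rules out in the one-shot setting.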
  2.
    Quantum computational supremacy arguments, which describe a way for a quantum computer to perform a task that cannot also be done by a classical computer, typically require some sort of computational assumption related to the limitations of classical computation. One common assumption is that the polynomial hierarchy (PH) does not collapse, a stronger version of the statement that P ≠ NP, which leads to the conclusion that any classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuits. However, the asymptotic nature of this conclusion prevents us from calculating exactly how many qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers. We refine these quantum computational supremacy arguments and perform such a calculation by imposing fine-grained versions of the non-collapse conjecture. Our first two conjectures, poly3-NSETH(a) and per-int-NSETH(b), take specific classical counting problems related to the number of zeros of a degree-3 polynomial in n variables over F_2 or the permanent of an n × n integer-valued matrix, and assert that any non-deterministic algorithm that solves them requires 2^{cn} time steps, where c ∈ {a, b}. A third conjecture, poly3-ave-SBSETH(a′), asserts a similar statement about average-case algorithms living in the exponential-time version of the complexity class SBP.
We analyze evidence for these conjectures and argue that they are plausible when a = 1/2, b = 0.999 and a′ = 1/2. Imposing poly3-NSETH(1/2) and per-int-NSETH(0.999), and assuming that the runtime of a hypothetical quantum circuit simulation algorithm would scale linearly with the number of gates/constraints/optical elements, we conclude that Instantaneous Quantum Polynomial-Time (IQP) circuits with 208 qubits and 500 gates, Quantum Approximate Optimization Algorithm (QAOA) circuits with 420 qubits and 500 constraints, and boson sampling circuits (i.e. linear optical networks) with 98 photons and 500 optical elements are large enough for the task of producing samples from their output distributions up to constant multiplicative error to be intractable on current technology. Imposing poly3-ave-SBSETH(1/2), we additionally rule out simulations with constant additive error for IQP and QAOA circuits of the same size. Without the assumption of linearly increasing simulation time, we can make analogous statements for circuits with slightly fewer qubits but requiring 10^4 to 10^7 gates.
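To see how a fine-grained lower bound of the form 2^{cn} turns into a concrete problem size, here is an illustrative back-of-the-envelope calculation. The compute budget (an exaflop machine running for a year) and the bare 2^{cn} cost model are assumptions for illustration only; the paper's 208/420/98 figures come from a more careful accounting that also involves the gate count:

```python
import math

def intractable_qubits(c, ops_per_sec=1e18, seconds=3.15e7):
    """Smallest n for which 2**(c*n) elementary steps exceed the compute budget."""
    budget = ops_per_sec * seconds  # ~3.15e25 ops: an exaflop machine for one year
    return math.ceil(math.log2(budget) / c)

print(intractable_qubits(0.5))    # exponent from poly3-NSETH(1/2)   -> 170
print(intractable_qubits(0.999))  # exponent from per-int-NSETH(0.999) -> 85
```

Even this crude estimate lands within a factor of a few of the paper's refined qubit counts, showing why the fine-grained exponent c is the quantity that matters.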
  3.
    We use an online database of a turbulent channel-flow simulation at $Re_\tau = 1000$ (Graham et al., J. Turbul., vol. 17, issue 2, 2016, pp. 181–215) to determine the origin of vorticity in the near-wall buffer layer. Following an experimental study of Sheng et al. (J. Fluid Mech., vol. 633, 2009, pp. 17–60), we identify typical ‘ejection’ and ‘sweep’ events in the buffer layer by local minima/maxima of the wall stress. In contrast to their conjecture, however, we find that vortex lifting from the wall is not a discrete event requiring $\sim$1 viscous time and $\sim$10 wall units, but is instead a distributed process over a space–time region at least 1–2 orders of magnitude larger in extent. To reach this conclusion, we exploit a rigorous mathematical theory of vorticity dynamics for Navier–Stokes solutions, in terms of stochastic Lagrangian flows and stochastic Cauchy invariants, conserved on average backward in time. This theory yields exact expressions for vorticity inside the flow domain in terms of vorticity at the wall, as transported by viscous diffusion and by nonlinear advection, stretching and rotation. We show that Lagrangian chaos observed in the buffer layer can be reconciled with saturated vorticity magnitude by ‘virtual reconnection’: although the Eulerian vorticity field in the viscous sublayer has a single sign of spanwise component, opposite signs of Lagrangian vorticity evolve by rotation and cancel by viscous destruction. Our analysis reveals many unifying features of classical fluids and quantum superfluids. We argue that ‘bundles’ of quantized vortices in superfluid turbulence will also exhibit stochastic Lagrangian dynamics and satisfy stochastic conservation laws resulting from particle-relabelling symmetry.
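The stochastic Lagrangian representation underlying the analysis above can be sketched, in the spirit of the Constantin–Iyer formula; signs and normalizations follow common conventions and are not necessarily the authors':

```latex
% Backward stochastic flow: particles arriving at (x,t), with noise set by viscosity:
d\tilde{A}_s(x) = u\!\left(\tilde{A}_s(x), s\right) ds + \sqrt{2\nu}\, d\widetilde{W}_s,
\qquad \tilde{A}_t(x) = x,
% Stochastic Cauchy formula: interior vorticity as an average, over stochastic
% trajectories, of earlier (or wall) vorticity stretched and rotated by the flow:
\omega(x,t) = \mathbb{E}\!\left[ \left( \nabla \tilde{A}_s(x) \right)^{-1}
    \omega\!\left( \tilde{A}_s(x), s \right) \right].
```

The bracketed quantity is the stochastic Cauchy invariant: it is conserved on average backward in time, which is what permits exact attribution of buffer-layer vorticity to its sources at the wall.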
  4. Time-frequency (TF) filtering of analog signals has played a crucial role in the development of radio-frequency communications and is currently being recognized as an essential capability for communications, both classical and quantum, in the optical frequency domain. How best to design optical TF filters to pass a targeted temporal mode (TM) and to reject background (noise) photons in the TF detection window? The solution for ‘coherent’ TF filtering is known—the quantum pulse gate—whereas the conventional, more common method is implemented by a sequence of incoherent spectral filtering and temporal gating operations. To compare these two methods, we derive a general formalism for two-stage incoherent time-frequency filtering, finding expressions for signal pulse transmission efficiency and for the ability to discriminate TMs, which allows the blocking of unwanted background light. We derive the tradeoff between efficiency and TM discrimination ability, and find a remarkably concise relation between these two quantities and the time-bandwidth product of the combined filters. We apply the formalism to two examples, rectangular filters and Gaussian filters, both of which have known orthogonal-function decompositions. The formalism can be applied to any state of light occupying the input temporal mode, e.g., ‘classical’ coherent-state signals or pulsed single-photon states of light. In contrast to the radio-frequency domain, where coherent detection is standard and one can use coherent matched filtering to reject noise, in the optical domain direct detection is optimal in a number of scenarios where the signal flux is extremely small. Our analysis shows how the insertion loss and signal-to-noise ratio (SNR) change when one uses incoherent optical filters to reject background noise, followed by direct detection, e.g. photon counting. We point out implications in classical and quantum optical communications.
As an example, we study quantum key distribution, wherein strong rejection of background noise is necessary to maintain a high quality of entanglement, while high signal transmission is needed to ensure a useful key generation rate.
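A minimal numerical sketch of the two-stage incoherent filter described above, assuming Gaussian amplitude-transmission profiles and a Gaussian input temporal mode; all widths are illustrative choices, not values from the paper:

```python
import numpy as np

# Two-stage incoherent TF filter: spectral filter, then temporal gate.
t = np.linspace(-50, 50, 4096)            # time grid (arbitrary units)
dt = t[1] - t[0]
signal = np.exp(-t**2 / 2)                # input Gaussian temporal mode, unit width

# Stage 1: spectral filter with Gaussian amplitude transmission, bandwidth B
f = np.fft.fftfreq(t.size, d=dt)
B = 0.5
filtered = np.fft.ifft(np.fft.fft(signal) * np.exp(-f**2 / (2 * B**2)))

# Stage 2: temporal gate with Gaussian amplitude transmission, duration T
T = 3.0
out = filtered * np.exp(-t**2 / (2 * T**2))

# Transmission efficiency: output energy over input energy (eta ≈ 0.9 here)
eta = np.sum(np.abs(out)**2) / np.sum(np.abs(signal)**2)
print(f"efficiency = {eta:.3f}")
```

Narrowing B or T raises background rejection but lowers eta, which is the efficiency-versus-discrimination tradeoff the formalism quantifies via the time-bandwidth product of the combined filters.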

  5.
    Abstract Non-analyticity in co-moving momenta within the non-Gaussian bispectrum is a distinctive sign of on-shell particle production during inflation, presenting a unique opportunity for the “direct detection” of particles with masses as large as the inflationary Hubble scale ($H$). However, the strength of such non-analyticity ordinarily drops exponentially by a Boltzmann-like factor as masses exceed $H$. In this paper, we study an exception provided by a dimension-5 derivative coupling of the inflaton to heavy-particle currents, applying it specifically to the case of two real scalars. The operator has a “chemical potential” form, which harnesses the large kinetic energy scale of the inflaton, $\dot{\phi}_0^{1/2} \approx 60H$, to act as an efficient source of scalar particle production. Derivative couplings of the inflaton ensure radiative stability of the slow-roll potential, which in turn maintains (approximate) scale-invariance of the inflationary correlations. We show that a signal not suffering Boltzmann suppression can be obtained in the bispectrum with strength $f_{NL} \sim \mathcal{O}(0.01\text{–}10)$ for an extended range of scalar masses $\lesssim \dot{\phi}_0^{1/2}$, potentially as high as $10^{15}$ GeV, within the sensitivity of upcoming LSS and more futuristic 21-cm experiments. The mechanism does not invoke any particular fine-tuning of parameters or breakdown of perturbation-theoretic control. The leading contribution appears at tree level, which makes the calculation analytically tractable and removes the loop suppression of earlier chemical-potential studies of non-zero spins. The steady particle production allows us to infer the effective mass of the heavy particles and the chemical potential from the variation in bispectrum oscillations as a function of co-moving momenta. Our analysis sets the stage for generalization to heavy bosons with non-zero spin.
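The chemical-potential mechanism can be sketched as follows; the specific form of the current is an assumption consistent with the two-real-scalar case described above, with schematic normalizations:

```latex
% Dimension-5 derivative coupling of the inflaton \phi to a heavy-scalar current:
\mathcal{L} \;\supset\; \frac{\partial_\mu \phi}{\Lambda}\, \chi_1 \partial^\mu \chi_2,
% On the rolling background \phi = \phi_0(t) this acts as a chemical potential
\mu \;\equiv\; \frac{\dot{\phi}_0}{\Lambda},
% which shifts the dispersion of the heavy pair and lifts the Boltzmann factor
% e^{-\pi m / H} for masses up to m \sim \mu \lesssim \dot{\phi}_0^{1/2} \approx 60\,H.
```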