Abstract Quantum systems have entered a competitive regime in which classical computers must make approximations to represent highly entangled quantum states1,2. However, in this beyond-classically-exact regime, fidelity comparisons between quantum and classical systems have so far been limited to digital quantum devices2–5, and it remains an open question how to estimate the actual entanglement content of experiments6. Here, we perform fidelity benchmarking and mixed-state entanglement estimation with a 60-atom analogue Rydberg quantum simulator, reaching a high-entanglement entropy regime in which exact classical simulation becomes impractical. Our benchmarking protocol involves extrapolation from comparisons against an approximate classical algorithm, introduced here, with varying entanglement limits. We then develop and demonstrate an estimator of the experimental mixed-state entanglement6, finding our experiment is competitive with state-of-the-art digital quantum devices performing random circuit evolution2–5. Finally, we compare the experimental fidelity against that achieved by various approximate classical algorithms, and find that only the algorithm we introduce is able to keep pace with the experiment on the classical hardware we use. Our results enable a new model for evaluating the ability of both analogue and digital quantum devices to generate entanglement in the beyond-classically-exact regime, and highlight the evolving divide between quantum and classical systems.
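The extrapolation idea behind this kind of benchmarking can be sketched numerically: compare the device against approximate classical simulations with increasing entanglement (bond-dimension) limits χ, then extrapolate the comparison to the χ → ∞ limit. The saturation model F(χ) ≈ F∞ − c/χ and all numbers below are illustrative assumptions, not values from the experiment.

```python
# Hypothetical sketch of fidelity extrapolation against approximate classical
# simulations with growing entanglement (bond-dimension) limits chi.
# The model F(chi) ~ F_inf - c/chi and the numbers are illustrative only.
import numpy as np

chis = np.array([64, 128, 256, 512, 1024], dtype=float)
# Synthetic cross-comparison fidelities that saturate as chi grows.
f_est = 0.90 - 12.0 / chis

# Linear fit of F against 1/chi; the intercept estimates the chi -> inf limit.
slope, intercept = np.polyfit(1.0 / chis, f_est, 1)
print(f"extrapolated fidelity: {intercept:.3f}")
```

The design choice is that a simulation with a finite entanglement cap systematically underestimates fidelity, so fitting against the inverse cap and reading off the intercept recovers an estimate of the true device fidelity.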
Monolithic Integration of Quantum Emitters with Silicon Nitride Photonic Platform
Silicon nitride has great potential for integrated quantum photonics. We demonstrate monolithic integration of intrinsic quantum emitters in SiN with waveguides, which show a room-temperature off-chip count rate of ~10^4 counts/s and clear antibunching behavior.
- Award ID(s): 2015025
- PAR ID: 10348794
- Date Published:
- Journal Name: Monolithic Integration of Quantum Emitters with Silicon Nitride Photonic Platform
- Page Range / eLocation ID: FW5F.6
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Abstract Our study evaluates the limitations and potentials of Quantum Random Access Memory (QRAM) within the principles of quantum physics and relativity. QRAM is crucial for advancing quantum algorithms in fields like linear algebra and machine learning, purported to efficiently manage large data sets with $\mathcal{O}(\log N)$ circuit depth. However, its scalability is questioned when considering the relativistic constraints on qubits interacting locally. Utilizing relativistic quantum field theory and Lieb–Robinson bounds, we delve into the causality-based limits of QRAM. Our investigation introduces a feasible QRAM model in hybrid quantum acoustic systems, capable of supporting a significant number of logical qubits across different dimensions: up to ~10^7 in 1D, ~10^15 to ~10^20 in 2D, and ~10^24 in 3D, within practical operation parameters. This analysis suggests that relativistic causality principles could universally influence quantum computing hardware, underscoring the need for innovative quantum memory solutions to navigate these foundational barriers, thereby enhancing future quantum computing endeavors in data science.
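The origin of the O(log N) circuit-depth claim can be illustrated with simple bookkeeping for a bucket-brigade-style addressing tree (a standard QRAM construction; the sketch below is illustrative and not taken from the paper): a binary routing tree over N memory cells has N − 1 internal routers and depth log2(N).

```python
# Bookkeeping sketch for a binary QRAM addressing tree (illustrative):
# N memory cells need N - 1 internal routing nodes, and an address
# traverses log2(N) tree levels, hence O(log N) query depth.
import math

def qram_tree_stats(n_cells: int) -> tuple[int, int]:
    """Return (router count, tree depth) for a power-of-two cell count."""
    assert n_cells > 0 and n_cells & (n_cells - 1) == 0, "power of two expected"
    routers = n_cells - 1
    depth = int(math.log2(n_cells))
    return routers, depth

for n in (2**10, 2**20, 2**30):
    routers, depth = qram_tree_stats(n)
    print(f"N = 2^{depth}: {routers} routers, depth {depth}")
```

The scalability question the abstract raises is exactly about this tree: the router count grows linearly with N even though the depth grows only logarithmically, so the physical layout and signal propagation, not the circuit depth, become the binding constraint.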
-
Estimating the volume of a convex body is a central problem in convex geometry and can be viewed as a continuous version of counting. We present a quantum algorithm that estimates the volume of an n-dimensional convex body within multiplicative error ε using Õ(n^3 + n^2.5/ε) queries to a membership oracle and Õ(n^5 + n^4.5/ε) additional arithmetic operations. For comparison, the best known classical algorithm uses Õ(n^3.5 + n^3/ε^2) queries and Õ(n^5.5 + n^5/ε^2) additional arithmetic operations. To the best of our knowledge, this is the first quantum speedup for volume estimation. Our algorithm is based on a refined framework for speeding up simulated annealing algorithms that might be of independent interest. This framework applies in the setting of "Chebyshev cooling", where the solution is expressed as a telescoping product of ratios, each having bounded variance. We develop several novel techniques when implementing our framework, including a theory of continuous-space quantum walks with rigorous bounds on discretization error. To complement our quantum algorithms, we also prove that volume estimation requires Ω(√n + 1/ε) quantum membership queries, which rules out the possibility of exponential quantum speedup in n and shows optimality of our algorithm in 1/ε up to poly-logarithmic factors.
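The telescoping-product structure can be seen in a purely classical toy (an assumed setup for illustration, not the paper's quantum algorithm): write vol(K) as vol(K_0) · ∏ vol(K_i)/vol(K_{i−1}) over nested bodies K_0 ⊂ … ⊂ K_m = K, keep each ratio close to a constant so its variance stays bounded, and estimate each ratio by uniform sampling.

```python
# Toy classical illustration of a "Chebyshev cooling" telescoping product:
# estimate the unit-disk area from nested disks with a fixed radius ratio,
# so every telescoping factor is ~constant (bounded relative variance).
import math
import random

random.seed(0)

def sample_disk(r: float) -> tuple[float, float]:
    """Uniform sample from a disk of radius r (inverse-CDF in the radius)."""
    rho = r * math.sqrt(random.random())
    theta = 2 * math.pi * random.random()
    return rho * math.cos(theta), rho * math.sin(theta)

# Nested disks, radius 0.5 -> 1.0 in 8 geometric steps.
radii = [0.5 * (2 ** (k / 8)) for k in range(9)]
estimate = math.pi * radii[0] ** 2          # vol(K_0) is known exactly

for r_prev, r_next in zip(radii, radii[1:]):
    samples = 20000
    hits = sum(1 for _ in range(samples)
               if math.hypot(*sample_disk(r_next)) <= r_prev)
    estimate /= hits / samples              # divide by vol(K_prev)/vol(K_next)

print(f"estimated unit-disk area: {estimate:.3f} (exact: {math.pi:.3f})")
```

Because each factor is bounded away from 0 and 1, a modest number of samples per stage suffices; the quantum algorithm's speedup comes from accelerating exactly this sampling subroutine.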
-
Abstract Suppressing errors is the central challenge for useful quantum computing1, requiring quantum error correction (QEC)2–6 for large-scale processing. However, the overhead in the realization of error-corrected 'logical' qubits, in which information is encoded across many physical qubits for redundancy2–4, poses substantial challenges to large-scale logical quantum computing. Here we report the realization of a programmable quantum processor based on encoded logical qubits operating with up to 280 physical qubits. Using logical-level control and a zoned architecture in reconfigurable neutral-atom arrays7, our system combines high two-qubit gate fidelities8, arbitrary connectivity7,9, as well as fully programmable single-qubit rotations and mid-circuit readout10–15. Operating this logical processor with various types of encoding, we demonstrate improvement of a two-qubit logic gate by scaling surface-code6 distance from d = 3 to d = 7, preparation of colour-code qubits with break-even fidelities5, fault-tolerant creation of logical Greenberger–Horne–Zeilinger (GHZ) states and feedforward entanglement teleportation, as well as operation of 40 colour-code qubits. Finally, using 3D [[8,3,2]] code blocks16,17, we realize computationally complex sampling circuits18 with up to 48 logical qubits entangled with hypercube connectivity19, with 228 logical two-qubit gates and 48 logical CCZ gates20. We find that this logical encoding substantially improves algorithmic performance with error detection, outperforming physical-qubit fidelities at both cross-entropy benchmarking and quantum simulations of fast scrambling21,22. These results herald the advent of early error-corrected quantum computation and chart a path towards large-scale logical processors.
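Why scaling the surface-code distance from d = 3 to d = 7 helps can be seen with the standard below-threshold heuristic (illustrative constants, not values from the paper): the logical error rate is suppressed roughly as p_L ≈ A · (p/p_th)^((d+1)/2), so each increase in distance multiplies the suppression when the physical error rate p is below the threshold p_th.

```python
# Back-of-the-envelope surface-code scaling (standard heuristic with
# assumed constants, not fitted to the reported experiment).

def logical_error_rate(p: float, d: int, p_th: float = 0.01, a: float = 0.1) -> float:
    """Heuristic logical error rate: p_L ~ a * (p / p_th)^((d + 1) / 2)."""
    return a * (p / p_th) ** ((d + 1) // 2)

p = 0.005  # assumed physical error rate, below the assumed threshold
for d in (3, 5, 7):
    print(f"d = {d}: p_L ~ {logical_error_rate(p, d):.2e}")
```

With these assumed numbers each step in distance halves the logical error rate; the improvement is exponential in d, which is the basis of the break-even and gate-improvement claims in the abstract.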
-
Abstract Photodetectors based on colloidal quantum dot (QD)/graphene nanohybrids are quantum sensors due to strong quantum confinement in both QD and graphene. The optoelectronic properties of QD/graphene nanohybrids are affected by the quantum physics that predicts a high photoconductive gain and hence photoresponsivity (R*) depending on the pixel length (L) as R* ∝ L^−2. Experimental confirmation of the effect of the pixel geometric parameters on the optoelectronic properties of the QD/graphene photodetector is therefore important to elucidate the underlying quantum physics. Motivated by this, an array of PbS QD/graphene nanohybrid photodetectors is designed with variable QD/graphene pixel length L and width W in the range of 10–150 µm for a study of R*, noise, and specific detectivity (D*) in a broad spectrum of 400–1500 nm. Intriguingly, R* exhibits a monotonic decreasing trend of 1/L^2 while being independent of W, confirming experimentally the theoretical prediction. Interestingly, this geometric effect on the photoresponsivity seems to be partially compensated by that in noise, leading to D* independent of L and W at wavelengths in the ultraviolet–visible–near-infrared range. This result sheds light on the quantum physics underlying the optoelectronic process in QD/graphene nanohybrids, which is important to the design of high-quality QD/graphene photodetectors and imaging systems.
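The reported geometric scaling is easy to check numerically (the prefactor below is an illustrative assumption, not a fitted value): if R* = r0/L^2, a 3× longer pixel gives a 9× smaller responsivity, independent of the pixel width W.

```python
# Numerical sketch of the R* ∝ 1/L^2 scaling reported for the QD/graphene
# pixels. The prefactor r0 is an assumed illustrative constant.

def responsivity(length_um: float, r0: float = 1e7) -> float:
    """R* in A/W for pixel length L in micrometres, assuming R* = r0 / L^2."""
    return r0 / length_um ** 2

for length in (10.0, 50.0, 150.0):
    print(f"L = {length:5.1f} um: R* = {responsivity(length):.3g} A/W")

# The 1/L^2 law means tripling the pixel length cuts R* by a factor of 9.
ratio = responsivity(10.0) / responsivity(30.0)
print(f"R*(10 um) / R*(30 um) = {ratio:.1f}")
```

The abstract's observation that D* stays flat then follows if the noise current scales with geometry in the same way as R*·√A, so the two geometric dependences cancel in D* = R*·√A / i_noise.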